00:00:00.000 Started by upstream project "autotest-per-patch" build number 132854
00:00:00.000 originally caused by:
00:00:00.000 Started by user sys_sgci
00:00:00.038 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.039 The recommended git tool is: git
00:00:00.039 using credential 00000000-0000-0000-0000-000000000002
00:00:00.040 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.058 Fetching changes from the remote Git repository
00:00:00.061 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.115 Using shallow fetch with depth 1
00:00:00.115 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.115 > git --version # timeout=10
00:00:00.160 > git --version # 'git version 2.39.2'
00:00:00.160 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.197 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.197 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:04.704 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:04.718 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:04.732 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:04.732 > git config core.sparsecheckout # timeout=10
00:00:04.745 > git read-tree -mu HEAD # timeout=10
00:00:04.762 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:04.779 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:04.779 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:04.865 [Pipeline] Start of Pipeline
00:00:04.878 [Pipeline] library
00:00:04.879 Loading library shm_lib@master
00:00:04.879 Library shm_lib@master is cached. Copying from home.
00:00:04.894 [Pipeline] node
00:00:04.904 Running on VM-host-WFP7 in /var/jenkins/workspace/raid-vg-autotest
00:00:04.906 [Pipeline] {
00:00:04.917 [Pipeline] catchError
00:00:04.918 [Pipeline] {
00:00:04.927 [Pipeline] wrap
00:00:04.933 [Pipeline] {
00:00:04.938 [Pipeline] stage
00:00:04.940 [Pipeline] { (Prologue)
00:00:04.950 [Pipeline] echo
00:00:04.951 Node: VM-host-WFP7
00:00:04.955 [Pipeline] cleanWs
00:00:04.963 [WS-CLEANUP] Deleting project workspace...
00:00:04.964 [WS-CLEANUP] Deferred wipeout is used...
00:00:04.970 [WS-CLEANUP] done
00:00:05.161 [Pipeline] setCustomBuildProperty
00:00:05.237 [Pipeline] httpRequest
00:00:05.605 [Pipeline] echo
00:00:05.607 Sorcerer 10.211.164.20 is alive
00:00:05.617 [Pipeline] retry
00:00:05.619 [Pipeline] {
00:00:05.634 [Pipeline] httpRequest
00:00:05.638 HttpMethod: GET
00:00:05.639 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:05.639 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:05.641 Response Code: HTTP/1.1 200 OK
00:00:05.641 Success: Status code 200 is in the accepted range: 200,404
00:00:05.642 Saving response body to /var/jenkins/workspace/raid-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:06.254 [Pipeline] }
00:00:06.266 [Pipeline] // retry
00:00:06.272 [Pipeline] sh
00:00:06.558 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:06.570 [Pipeline] httpRequest
00:00:07.009 [Pipeline] echo
00:00:07.011 Sorcerer 10.211.164.20 is alive
00:00:07.017 [Pipeline] retry
00:00:07.018 [Pipeline] {
00:00:07.026 [Pipeline] httpRequest
00:00:07.030 HttpMethod: GET
00:00:07.030 URL: http://10.211.164.20/packages/spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz
00:00:07.031 Sending request to url: http://10.211.164.20/packages/spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz
00:00:07.032 Response Code: HTTP/1.1 200 OK
00:00:07.033 Success: Status code 200 is in the accepted range: 200,404
00:00:07.034 Saving response body to /var/jenkins/workspace/raid-vg-autotest/spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz
00:00:28.464 [Pipeline] }
00:00:28.483 [Pipeline] // retry
00:00:28.492 [Pipeline] sh
00:00:28.779 + tar --no-same-owner -xf spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz
00:00:31.335 [Pipeline] sh
00:00:31.622 + git -C spdk log --oneline -n5
00:00:31.622 e01cb43b8 mk/spdk.common.mk sed the minor version
00:00:31.622 d58eef2a2 nvme/rdma: Fix reinserting qpair in connecting list after stale state
00:00:31.622 2104eacf0 test/check_so_deps: use VERSION to look for prior tags
00:00:31.622 66289a6db build: use VERSION file for storing version
00:00:31.622 626389917 nvme/rdma: Don't limit max_sge if UMR is used
00:00:31.645 [Pipeline] writeFile
00:00:31.661 [Pipeline] sh
00:00:31.949 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:00:31.962 [Pipeline] sh
00:00:32.249 + cat autorun-spdk.conf
00:00:32.249 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:32.249 SPDK_RUN_ASAN=1
00:00:32.249 SPDK_RUN_UBSAN=1
00:00:32.249 SPDK_TEST_RAID=1
00:00:32.249 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:00:32.257 RUN_NIGHTLY=0
00:00:32.259 [Pipeline] }
00:00:32.272 [Pipeline] // stage
00:00:32.288 [Pipeline] stage
00:00:32.290 [Pipeline] { (Run VM)
00:00:32.303 [Pipeline] sh
00:00:32.588 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:00:32.588 + echo 'Start stage prepare_nvme.sh'
00:00:32.588 Start stage prepare_nvme.sh
00:00:32.588 + [[ -n 5 ]]
00:00:32.588 + disk_prefix=ex5
00:00:32.588 + [[ -n /var/jenkins/workspace/raid-vg-autotest ]]
00:00:32.588 + [[ -e /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf ]]
00:00:32.588 + source /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf
00:00:32.588 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:32.588 ++ SPDK_RUN_ASAN=1
00:00:32.588 ++ SPDK_RUN_UBSAN=1
00:00:32.588 ++ SPDK_TEST_RAID=1
00:00:32.588 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:00:32.588 ++ RUN_NIGHTLY=0
00:00:32.588 + cd /var/jenkins/workspace/raid-vg-autotest
00:00:32.588 + nvme_files=()
00:00:32.588 + declare -A nvme_files
00:00:32.588 + backend_dir=/var/lib/libvirt/images/backends
00:00:32.588 + nvme_files['nvme.img']=5G
00:00:32.588 + nvme_files['nvme-cmb.img']=5G
00:00:32.588 + nvme_files['nvme-multi0.img']=4G
00:00:32.588 + nvme_files['nvme-multi1.img']=4G
00:00:32.588 + nvme_files['nvme-multi2.img']=4G
00:00:32.588 + nvme_files['nvme-openstack.img']=8G
00:00:32.588 + nvme_files['nvme-zns.img']=5G
00:00:32.588 + (( SPDK_TEST_NVME_PMR == 1 ))
00:00:32.588 + (( SPDK_TEST_FTL == 1 ))
00:00:32.588 + (( SPDK_TEST_NVME_FDP == 1 ))
00:00:32.588 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:00:32.588 + for nvme in "${!nvme_files[@]}"
00:00:32.588 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi2.img -s 4G
00:00:32.588 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:00:32.588 + for nvme in "${!nvme_files[@]}"
00:00:32.588 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-cmb.img -s 5G
00:00:32.588 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:00:32.588 + for nvme in "${!nvme_files[@]}"
00:00:32.588 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-openstack.img -s 8G
00:00:32.588 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:00:32.588 + for nvme in "${!nvme_files[@]}"
00:00:32.588 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-zns.img -s 5G
00:00:32.588 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:00:32.588 + for nvme in "${!nvme_files[@]}"
00:00:32.588 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi1.img -s 4G
00:00:32.588 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:00:32.588 + for nvme in "${!nvme_files[@]}"
00:00:32.588 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi0.img -s 4G
00:00:32.588 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:00:32.588 + for nvme in "${!nvme_files[@]}"
00:00:32.588 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme.img -s 5G
00:00:32.848 Formatting '/var/lib/libvirt/images/backends/ex5-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:00:32.848 ++ sudo grep -rl ex5-nvme.img /etc/libvirt/qemu
00:00:32.848 + echo 'End stage prepare_nvme.sh'
00:00:32.848 End stage prepare_nvme.sh
00:00:32.861 [Pipeline] sh
00:00:33.145 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:00:33.145 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b /var/lib/libvirt/images/backends/ex5-nvme.img -b /var/lib/libvirt/images/backends/ex5-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img -H -a -v -f fedora39
00:00:33.145
00:00:33.145 DIR=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant
00:00:33.145 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest/spdk
00:00:33.145 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest
00:00:33.145 HELP=0
00:00:33.145 DRY_RUN=0
00:00:33.145 NVME_FILE=/var/lib/libvirt/images/backends/ex5-nvme.img,/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,
00:00:33.145 NVME_DISKS_TYPE=nvme,nvme,
00:00:33.145 NVME_AUTO_CREATE=0
00:00:33.145 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,
00:00:33.145 NVME_CMB=,,
00:00:33.145 NVME_PMR=,,
00:00:33.145 NVME_ZNS=,,
00:00:33.145 NVME_MS=,,
00:00:33.145 NVME_FDP=,,
00:00:33.145 SPDK_VAGRANT_DISTRO=fedora39
00:00:33.145 SPDK_VAGRANT_VMCPU=10
00:00:33.145 SPDK_VAGRANT_VMRAM=12288
00:00:33.145 SPDK_VAGRANT_PROVIDER=libvirt
00:00:33.145 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:00:33.145 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:00:33.145 SPDK_OPENSTACK_NETWORK=0
00:00:33.145 VAGRANT_PACKAGE_BOX=0
00:00:33.145 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:00:33.145 FORCE_DISTRO=true
00:00:33.146 VAGRANT_BOX_VERSION=
00:00:33.146 EXTRA_VAGRANTFILES=
00:00:33.146 NIC_MODEL=virtio
00:00:33.146
00:00:33.146 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt'
00:00:33.146 /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest
00:00:35.054 Bringing machine 'default' up with 'libvirt' provider...
00:00:35.622 ==> default: Creating image (snapshot of base box volume).
00:00:35.883 ==> default: Creating domain with the following settings...
00:00:35.883 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1734019021_316cedb4a2d037c526b5
00:00:35.883 ==> default: -- Domain type: kvm
00:00:35.883 ==> default: -- Cpus: 10
00:00:35.883 ==> default: -- Feature: acpi
00:00:35.883 ==> default: -- Feature: apic
00:00:35.883 ==> default: -- Feature: pae
00:00:35.883 ==> default: -- Memory: 12288M
00:00:35.883 ==> default: -- Memory Backing: hugepages:
00:00:35.883 ==> default: -- Management MAC:
00:00:35.883 ==> default: -- Loader:
00:00:35.883 ==> default: -- Nvram:
00:00:35.883 ==> default: -- Base box: spdk/fedora39
00:00:35.883 ==> default: -- Storage pool: default
00:00:35.883 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1734019021_316cedb4a2d037c526b5.img (20G)
00:00:35.883 ==> default: -- Volume Cache: default
00:00:35.883 ==> default: -- Kernel:
00:00:35.883 ==> default: -- Initrd:
00:00:35.883 ==> default: -- Graphics Type: vnc
00:00:35.883 ==> default: -- Graphics Port: -1
00:00:35.883 ==> default: -- Graphics IP: 127.0.0.1
00:00:35.883 ==> default: -- Graphics Password: Not defined
00:00:35.883 ==> default: -- Video Type: cirrus
00:00:35.883 ==> default: -- Video VRAM: 9216
00:00:35.883 ==> default: -- Sound Type:
00:00:35.883 ==> default: -- Keymap: en-us
00:00:35.883 ==> default: -- TPM Path:
00:00:35.883 ==> default: -- INPUT: type=mouse, bus=ps2
00:00:35.883 ==> default: -- Command line args:
00:00:35.883 ==> default: -> value=-device,
00:00:35.883 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:00:35.883 ==> default: -> value=-drive,
00:00:35.883 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme.img,if=none,id=nvme-0-drive0,
00:00:35.883 ==> default: -> value=-device,
00:00:35.883 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:35.883 ==> default: -> value=-device,
00:00:35.883 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:00:35.883 ==> default: -> value=-drive,
00:00:35.883 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,if=none,id=nvme-1-drive0,
00:00:35.883 ==> default: -> value=-device,
00:00:35.883 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:35.883 ==> default: -> value=-drive,
00:00:35.883 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi1.img,if=none,id=nvme-1-drive1,
00:00:35.883 ==> default: -> value=-device,
00:00:35.883 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:35.883 ==> default: -> value=-drive,
00:00:35.883 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,if=none,id=nvme-1-drive2,
00:00:35.883 ==> default: -> value=-device,
00:00:35.883 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:35.883 ==> default: Creating shared folders metadata...
00:00:35.883 ==> default: Starting domain.
00:00:37.798 ==> default: Waiting for domain to get an IP address...
00:00:55.905 ==> default: Waiting for SSH to become available...
00:00:55.905 ==> default: Configuring and enabling network interfaces...
00:01:01.190 default: SSH address: 192.168.121.2:22
00:01:01.190 default: SSH username: vagrant
00:01:01.190 default: SSH auth method: private key
00:01:03.731 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:01:11.879 ==> default: Mounting SSHFS shared folder...
00:01:14.418 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:01:14.418 ==> default: Checking Mount..
00:01:15.798 ==> default: Folder Successfully Mounted!
00:01:15.798 ==> default: Running provisioner: file...
00:01:16.737 default: ~/.gitconfig => .gitconfig
00:01:17.306
00:01:17.306 SUCCESS!
00:01:17.306
00:01:17.306 cd to /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:01:17.306 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:01:17.306 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:01:17.306
00:01:17.316 [Pipeline] }
00:01:17.330 [Pipeline] // stage
00:01:17.339 [Pipeline] dir
00:01:17.339 Running in /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt
00:01:17.341 [Pipeline] {
00:01:17.353 [Pipeline] catchError
00:01:17.355 [Pipeline] {
00:01:17.367 [Pipeline] sh
00:01:17.650 + vagrant ssh-config --host vagrant
00:01:17.650 + sed -ne /^Host/,$p
00:01:17.650 + tee ssh_conf
00:01:20.183 Host vagrant
00:01:20.183 HostName 192.168.121.2
00:01:20.183 User vagrant
00:01:20.183 Port 22
00:01:20.183 UserKnownHostsFile /dev/null
00:01:20.183 StrictHostKeyChecking no
00:01:20.183 PasswordAuthentication no
00:01:20.183 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:01:20.183 IdentitiesOnly yes
00:01:20.183 LogLevel FATAL
00:01:20.183 ForwardAgent yes
00:01:20.183 ForwardX11 yes
00:01:20.183
00:01:20.197 [Pipeline] withEnv
00:01:20.199 [Pipeline] {
00:01:20.212 [Pipeline] sh
00:01:20.493 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:01:20.494 source /etc/os-release
00:01:20.494 [[ -e /image.version ]] && img=$(< /image.version)
00:01:20.494 # Minimal, systemd-like check.
00:01:20.494 if [[ -e /.dockerenv ]]; then
00:01:20.494 # Clear garbage from the node's name:
00:01:20.494 # agt-er_autotest_547-896 -> autotest_547-896
00:01:20.494 # $HOSTNAME is the actual container id
00:01:20.494 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:01:20.494 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:01:20.494 # We can assume this is a mount from a host where container is running,
00:01:20.494 # so fetch its hostname to easily identify the target swarm worker.
00:01:20.494 container="$(< /etc/hostname) ($agent)"
00:01:20.494 else
00:01:20.494 # Fallback
00:01:20.494 container=$agent
00:01:20.494 fi
00:01:20.494 fi
00:01:20.494 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:01:20.494
00:01:20.766 [Pipeline] }
00:01:20.781 [Pipeline] // withEnv
00:01:20.790 [Pipeline] setCustomBuildProperty
00:01:20.806 [Pipeline] stage
00:01:20.808 [Pipeline] { (Tests)
00:01:20.824 [Pipeline] sh
00:01:21.105 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:01:21.378 [Pipeline] sh
00:01:21.748 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:01:22.023 [Pipeline] timeout
00:01:22.023 Timeout set to expire in 1 hr 30 min
00:01:22.025 [Pipeline] {
00:01:22.039 [Pipeline] sh
00:01:22.321 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:01:22.890 HEAD is now at e01cb43b8 mk/spdk.common.mk sed the minor version
00:01:22.902 [Pipeline] sh
00:01:23.183 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:01:23.454 [Pipeline] sh
00:01:23.735 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:01:24.011 [Pipeline] sh
00:01:24.293 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo
00:01:24.552 ++ readlink -f spdk_repo
00:01:24.552 + DIR_ROOT=/home/vagrant/spdk_repo
00:01:24.552 + [[ -n /home/vagrant/spdk_repo ]]
00:01:24.552 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:01:24.552 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:01:24.552 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:01:24.552 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:01:24.552 + [[ -d /home/vagrant/spdk_repo/output ]]
00:01:24.552 + [[ raid-vg-autotest == pkgdep-* ]]
00:01:24.552 + cd /home/vagrant/spdk_repo
00:01:24.552 + source /etc/os-release
00:01:24.552 ++ NAME='Fedora Linux'
00:01:24.552 ++ VERSION='39 (Cloud Edition)'
00:01:24.552 ++ ID=fedora
00:01:24.552 ++ VERSION_ID=39
00:01:24.552 ++ VERSION_CODENAME=
00:01:24.552 ++ PLATFORM_ID=platform:f39
00:01:24.552 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:24.552 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:24.552 ++ LOGO=fedora-logo-icon
00:01:24.552 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:24.552 ++ HOME_URL=https://fedoraproject.org/
00:01:24.552 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:24.552 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:24.552 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:24.552 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:24.552 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:24.553 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:24.553 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:24.553 ++ SUPPORT_END=2024-11-12
00:01:24.553 ++ VARIANT='Cloud Edition'
00:01:24.553 ++ VARIANT_ID=cloud
00:01:24.553 + uname -a
00:01:24.553 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:01:24.553 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:01:25.121 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:01:25.121 Hugepages
00:01:25.121 node hugesize free / total
00:01:25.121 node0 1048576kB 0 / 0
00:01:25.121 node0 2048kB 0 / 0
00:01:25.121
00:01:25.121 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:25.121 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:01:25.121 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:01:25.121 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3
00:01:25.121 + rm -f /tmp/spdk-ld-path
00:01:25.121 + source autorun-spdk.conf
00:01:25.121 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:25.121 ++ SPDK_RUN_ASAN=1
00:01:25.121 ++ SPDK_RUN_UBSAN=1
00:01:25.121 ++ SPDK_TEST_RAID=1
00:01:25.121 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:25.121 ++ RUN_NIGHTLY=0
00:01:25.121 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:25.121 + [[ -n '' ]]
00:01:25.121 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:01:25.121 + for M in /var/spdk/build-*-manifest.txt
00:01:25.121 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:01:25.121 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:01:25.121 + for M in /var/spdk/build-*-manifest.txt
00:01:25.121 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:25.121 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:01:25.381 + for M in /var/spdk/build-*-manifest.txt
00:01:25.381 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:25.381 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:01:25.381 ++ uname
00:01:25.381 + [[ Linux == \L\i\n\u\x ]]
00:01:25.381 + sudo dmesg -T
00:01:25.381 + sudo dmesg --clear
00:01:25.381 + dmesg_pid=5424
00:01:25.381 + sudo dmesg -Tw
00:01:25.381 + [[ Fedora Linux == FreeBSD ]]
00:01:25.381 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:25.381 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:25.381 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:25.381 + [[ -x /usr/src/fio-static/fio ]]
00:01:25.381 + export FIO_BIN=/usr/src/fio-static/fio
00:01:25.381 + FIO_BIN=/usr/src/fio-static/fio
00:01:25.381 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:25.381 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:25.381 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:25.381 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:25.381 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:25.381 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:25.381 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:25.381 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:25.381 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:25.381 15:57:51 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:01:25.381 15:57:51 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:25.381 15:57:51 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:25.381 15:57:51 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_RUN_ASAN=1
00:01:25.381 15:57:51 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_RUN_UBSAN=1
00:01:25.381 15:57:51 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_RAID=1
00:01:25.381 15:57:51 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:25.381 15:57:51 -- spdk_repo/autorun-spdk.conf@6 -- $ RUN_NIGHTLY=0
00:01:25.381 15:57:51 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:01:25.381 15:57:51 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:25.641 15:57:51 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:01:25.641 15:57:51 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:01:25.641 15:57:51 -- scripts/common.sh@15 -- $ shopt -s extglob
00:01:25.641 15:57:51 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:25.641 15:57:51 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:25.641 15:57:51 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:25.641 15:57:51 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:25.641 15:57:51 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:25.641 15:57:51 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:25.641 15:57:51 -- paths/export.sh@5 -- $ export PATH
00:01:25.641 15:57:51 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:25.641 15:57:51 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:01:25.641 15:57:51 -- common/autobuild_common.sh@493 -- $ date +%s
00:01:25.641 15:57:51 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1734019071.XXXXXX
00:01:25.641 15:57:51 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1734019071.thZbrS
00:01:25.641 15:57:51 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:01:25.641 15:57:51 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:01:25.641 15:57:51 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:01:25.641 15:57:51 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:01:25.641 15:57:51 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:01:25.641 15:57:51 -- common/autobuild_common.sh@509 -- $ get_config_params
00:01:25.641 15:57:51 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:01:25.641 15:57:51 -- common/autotest_common.sh@10 -- $ set +x
00:01:25.641 15:57:51 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f'
00:01:25.641 15:57:51 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:01:25.641 15:57:51 -- pm/common@17 -- $ local monitor
00:01:25.641 15:57:51 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:25.641 15:57:51 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:25.641 15:57:51 -- pm/common@21 -- $ date +%s
00:01:25.641 15:57:51 -- pm/common@25 -- $ sleep 1
00:01:25.641 15:57:51 -- pm/common@21 -- $ date +%s
00:01:25.641 15:57:51 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1734019071
00:01:25.641 15:57:51 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1734019071
00:01:25.641 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1734019071_collect-cpu-load.pm.log
00:01:25.641 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1734019071_collect-vmstat.pm.log
00:01:26.580 15:57:52 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:01:26.580 15:57:52 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:26.580 15:57:52 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:26.580 15:57:52 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:01:26.581 15:57:52 -- spdk/autobuild.sh@16 -- $ date -u
00:01:26.581 Thu Dec 12 03:57:52 PM UTC 2024
00:01:26.581 15:57:52 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:26.581 v25.01-rc1-2-ge01cb43b8
00:01:26.581 15:57:52 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:01:26.581 15:57:52 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:01:26.581 15:57:52 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:26.581 15:57:52 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:26.581 15:57:52 -- common/autotest_common.sh@10 -- $ set +x
00:01:26.581 ************************************
00:01:26.581 START TEST asan
00:01:26.581 ************************************
00:01:26.581 using asan
00:01:26.581 15:57:52 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:01:26.581
00:01:26.581 real 0m0.000s
00:01:26.581 user 0m0.000s
00:01:26.581 sys 0m0.000s
00:01:26.581 15:57:52 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:01:26.581 15:57:52 asan -- common/autotest_common.sh@10 -- $ set +x
00:01:26.581 ************************************
00:01:26.581 END TEST asan
00:01:26.581 ************************************
00:01:26.581 15:57:52 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:26.581 15:57:52 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:26.581 15:57:52 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:26.581 15:57:52 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:26.581 15:57:52 -- common/autotest_common.sh@10 -- $ set +x
00:01:26.581 ************************************
00:01:26.581 START TEST ubsan
00:01:26.581 ************************************
00:01:26.581 using ubsan
00:01:26.581 15:57:52 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:01:26.581
00:01:26.581 real 0m0.000s
00:01:26.581 user 0m0.000s
00:01:26.581 sys 0m0.000s
00:01:26.581 15:57:52 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:01:26.581 15:57:52 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:26.581 ************************************
00:01:26.581 END TEST ubsan
00:01:26.581 ************************************
00:01:26.840 15:57:52 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:26.840 15:57:52 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:26.840 15:57:52 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:26.840 15:57:52 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:26.840 15:57:52 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:26.840 15:57:52 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:26.840 15:57:52 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:26.840 15:57:52 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:26.840 15:57:52 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-shared
00:01:26.840 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:01:26.840 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:01:27.409 Using 'verbs' RDMA provider
00:01:43.234 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:02:01.334 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:02:01.334 Creating mk/config.mk...done.
00:02:01.334 Creating mk/cc.flags.mk...done.
00:02:01.334 Type 'make' to build.
00:02:01.334 15:58:25 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:02:01.334 15:58:25 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:01.334 15:58:25 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:01.334 15:58:25 -- common/autotest_common.sh@10 -- $ set +x
00:02:01.334 ************************************
00:02:01.334 START TEST make
00:02:01.334 ************************************
00:02:01.334 15:58:25 make -- common/autotest_common.sh@1129 -- $ make -j10
00:02:11.315 The Meson build system
00:02:11.315 Version: 1.5.0
00:02:11.315 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:02:11.315 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:02:11.315 Build type: native build
00:02:11.315 Program cat found: YES (/usr/bin/cat)
00:02:11.315 Project name: DPDK
00:02:11.315 Project version: 24.03.0
00:02:11.315 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:11.315 C linker for the host machine: cc ld.bfd 2.40-14
00:02:11.315 Host machine
cpu family: x86_64 00:02:11.315 Host machine cpu: x86_64 00:02:11.315 Message: ## Building in Developer Mode ## 00:02:11.315 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:11.315 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:11.315 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:11.315 Program python3 found: YES (/usr/bin/python3) 00:02:11.315 Program cat found: YES (/usr/bin/cat) 00:02:11.315 Compiler for C supports arguments -march=native: YES 00:02:11.315 Checking for size of "void *" : 8 00:02:11.315 Checking for size of "void *" : 8 (cached) 00:02:11.315 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:11.315 Library m found: YES 00:02:11.315 Library numa found: YES 00:02:11.315 Has header "numaif.h" : YES 00:02:11.315 Library fdt found: NO 00:02:11.315 Library execinfo found: NO 00:02:11.315 Has header "execinfo.h" : YES 00:02:11.315 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:11.315 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:11.315 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:11.315 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:11.315 Run-time dependency openssl found: YES 3.1.1 00:02:11.315 Run-time dependency libpcap found: YES 1.10.4 00:02:11.315 Has header "pcap.h" with dependency libpcap: YES 00:02:11.315 Compiler for C supports arguments -Wcast-qual: YES 00:02:11.315 Compiler for C supports arguments -Wdeprecated: YES 00:02:11.315 Compiler for C supports arguments -Wformat: YES 00:02:11.315 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:11.315 Compiler for C supports arguments -Wformat-security: NO 00:02:11.315 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:11.315 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:11.315 Compiler for C supports arguments 
-Wnested-externs: YES 00:02:11.315 Compiler for C supports arguments -Wold-style-definition: YES 00:02:11.315 Compiler for C supports arguments -Wpointer-arith: YES 00:02:11.315 Compiler for C supports arguments -Wsign-compare: YES 00:02:11.315 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:11.315 Compiler for C supports arguments -Wundef: YES 00:02:11.315 Compiler for C supports arguments -Wwrite-strings: YES 00:02:11.315 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:11.315 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:11.315 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:11.315 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:11.315 Program objdump found: YES (/usr/bin/objdump) 00:02:11.315 Compiler for C supports arguments -mavx512f: YES 00:02:11.315 Checking if "AVX512 checking" compiles: YES 00:02:11.315 Fetching value of define "__SSE4_2__" : 1 00:02:11.315 Fetching value of define "__AES__" : 1 00:02:11.315 Fetching value of define "__AVX__" : 1 00:02:11.315 Fetching value of define "__AVX2__" : 1 00:02:11.315 Fetching value of define "__AVX512BW__" : 1 00:02:11.315 Fetching value of define "__AVX512CD__" : 1 00:02:11.315 Fetching value of define "__AVX512DQ__" : 1 00:02:11.315 Fetching value of define "__AVX512F__" : 1 00:02:11.315 Fetching value of define "__AVX512VL__" : 1 00:02:11.315 Fetching value of define "__PCLMUL__" : 1 00:02:11.315 Fetching value of define "__RDRND__" : 1 00:02:11.315 Fetching value of define "__RDSEED__" : 1 00:02:11.315 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:11.315 Fetching value of define "__znver1__" : (undefined) 00:02:11.315 Fetching value of define "__znver2__" : (undefined) 00:02:11.315 Fetching value of define "__znver3__" : (undefined) 00:02:11.315 Fetching value of define "__znver4__" : (undefined) 00:02:11.315 Library asan found: YES 00:02:11.315 Compiler for C supports 
arguments -Wno-format-truncation: YES 00:02:11.315 Message: lib/log: Defining dependency "log" 00:02:11.315 Message: lib/kvargs: Defining dependency "kvargs" 00:02:11.315 Message: lib/telemetry: Defining dependency "telemetry" 00:02:11.315 Library rt found: YES 00:02:11.315 Checking for function "getentropy" : NO 00:02:11.315 Message: lib/eal: Defining dependency "eal" 00:02:11.315 Message: lib/ring: Defining dependency "ring" 00:02:11.315 Message: lib/rcu: Defining dependency "rcu" 00:02:11.315 Message: lib/mempool: Defining dependency "mempool" 00:02:11.315 Message: lib/mbuf: Defining dependency "mbuf" 00:02:11.315 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:11.315 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:11.315 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:11.315 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:11.315 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:11.315 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:11.315 Compiler for C supports arguments -mpclmul: YES 00:02:11.315 Compiler for C supports arguments -maes: YES 00:02:11.315 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:11.315 Compiler for C supports arguments -mavx512bw: YES 00:02:11.315 Compiler for C supports arguments -mavx512dq: YES 00:02:11.315 Compiler for C supports arguments -mavx512vl: YES 00:02:11.315 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:11.315 Compiler for C supports arguments -mavx2: YES 00:02:11.315 Compiler for C supports arguments -mavx: YES 00:02:11.315 Message: lib/net: Defining dependency "net" 00:02:11.315 Message: lib/meter: Defining dependency "meter" 00:02:11.315 Message: lib/ethdev: Defining dependency "ethdev" 00:02:11.315 Message: lib/pci: Defining dependency "pci" 00:02:11.315 Message: lib/cmdline: Defining dependency "cmdline" 00:02:11.315 Message: lib/hash: Defining dependency "hash" 00:02:11.315 Message: lib/timer: Defining dependency 
"timer" 00:02:11.315 Message: lib/compressdev: Defining dependency "compressdev" 00:02:11.315 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:11.315 Message: lib/dmadev: Defining dependency "dmadev" 00:02:11.315 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:11.315 Message: lib/power: Defining dependency "power" 00:02:11.315 Message: lib/reorder: Defining dependency "reorder" 00:02:11.315 Message: lib/security: Defining dependency "security" 00:02:11.315 Has header "linux/userfaultfd.h" : YES 00:02:11.315 Has header "linux/vduse.h" : YES 00:02:11.315 Message: lib/vhost: Defining dependency "vhost" 00:02:11.315 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:11.315 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:11.315 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:11.315 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:11.315 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:11.315 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:11.315 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:11.315 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:11.315 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:11.315 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:11.315 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:11.315 Configuring doxy-api-html.conf using configuration 00:02:11.315 Configuring doxy-api-man.conf using configuration 00:02:11.315 Program mandb found: YES (/usr/bin/mandb) 00:02:11.316 Program sphinx-build found: NO 00:02:11.316 Configuring rte_build_config.h using configuration 00:02:11.316 Message: 00:02:11.316 ================= 00:02:11.316 Applications Enabled 00:02:11.316 ================= 00:02:11.316 00:02:11.316 apps: 00:02:11.316 00:02:11.316 00:02:11.316 
Message: 00:02:11.316 ================= 00:02:11.316 Libraries Enabled 00:02:11.316 ================= 00:02:11.316 00:02:11.316 libs: 00:02:11.316 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:11.316 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:11.316 cryptodev, dmadev, power, reorder, security, vhost, 00:02:11.316 00:02:11.316 Message: 00:02:11.316 =============== 00:02:11.316 Drivers Enabled 00:02:11.316 =============== 00:02:11.316 00:02:11.316 common: 00:02:11.316 00:02:11.316 bus: 00:02:11.316 pci, vdev, 00:02:11.316 mempool: 00:02:11.316 ring, 00:02:11.316 dma: 00:02:11.316 00:02:11.316 net: 00:02:11.316 00:02:11.316 crypto: 00:02:11.316 00:02:11.316 compress: 00:02:11.316 00:02:11.316 vdpa: 00:02:11.316 00:02:11.316 00:02:11.316 Message: 00:02:11.316 ================= 00:02:11.316 Content Skipped 00:02:11.316 ================= 00:02:11.316 00:02:11.316 apps: 00:02:11.316 dumpcap: explicitly disabled via build config 00:02:11.316 graph: explicitly disabled via build config 00:02:11.316 pdump: explicitly disabled via build config 00:02:11.316 proc-info: explicitly disabled via build config 00:02:11.316 test-acl: explicitly disabled via build config 00:02:11.316 test-bbdev: explicitly disabled via build config 00:02:11.316 test-cmdline: explicitly disabled via build config 00:02:11.316 test-compress-perf: explicitly disabled via build config 00:02:11.316 test-crypto-perf: explicitly disabled via build config 00:02:11.316 test-dma-perf: explicitly disabled via build config 00:02:11.316 test-eventdev: explicitly disabled via build config 00:02:11.316 test-fib: explicitly disabled via build config 00:02:11.316 test-flow-perf: explicitly disabled via build config 00:02:11.316 test-gpudev: explicitly disabled via build config 00:02:11.316 test-mldev: explicitly disabled via build config 00:02:11.316 test-pipeline: explicitly disabled via build config 00:02:11.316 test-pmd: explicitly disabled via build config 00:02:11.316 
test-regex: explicitly disabled via build config 00:02:11.316 test-sad: explicitly disabled via build config 00:02:11.316 test-security-perf: explicitly disabled via build config 00:02:11.316 00:02:11.316 libs: 00:02:11.316 argparse: explicitly disabled via build config 00:02:11.316 metrics: explicitly disabled via build config 00:02:11.316 acl: explicitly disabled via build config 00:02:11.316 bbdev: explicitly disabled via build config 00:02:11.316 bitratestats: explicitly disabled via build config 00:02:11.316 bpf: explicitly disabled via build config 00:02:11.316 cfgfile: explicitly disabled via build config 00:02:11.316 distributor: explicitly disabled via build config 00:02:11.316 efd: explicitly disabled via build config 00:02:11.316 eventdev: explicitly disabled via build config 00:02:11.316 dispatcher: explicitly disabled via build config 00:02:11.316 gpudev: explicitly disabled via build config 00:02:11.316 gro: explicitly disabled via build config 00:02:11.316 gso: explicitly disabled via build config 00:02:11.316 ip_frag: explicitly disabled via build config 00:02:11.316 jobstats: explicitly disabled via build config 00:02:11.316 latencystats: explicitly disabled via build config 00:02:11.316 lpm: explicitly disabled via build config 00:02:11.316 member: explicitly disabled via build config 00:02:11.316 pcapng: explicitly disabled via build config 00:02:11.316 rawdev: explicitly disabled via build config 00:02:11.316 regexdev: explicitly disabled via build config 00:02:11.316 mldev: explicitly disabled via build config 00:02:11.316 rib: explicitly disabled via build config 00:02:11.316 sched: explicitly disabled via build config 00:02:11.316 stack: explicitly disabled via build config 00:02:11.316 ipsec: explicitly disabled via build config 00:02:11.316 pdcp: explicitly disabled via build config 00:02:11.316 fib: explicitly disabled via build config 00:02:11.316 port: explicitly disabled via build config 00:02:11.316 pdump: explicitly disabled via build 
config 00:02:11.316 table: explicitly disabled via build config 00:02:11.316 pipeline: explicitly disabled via build config 00:02:11.316 graph: explicitly disabled via build config 00:02:11.316 node: explicitly disabled via build config 00:02:11.316 00:02:11.316 drivers: 00:02:11.316 common/cpt: not in enabled drivers build config 00:02:11.316 common/dpaax: not in enabled drivers build config 00:02:11.316 common/iavf: not in enabled drivers build config 00:02:11.316 common/idpf: not in enabled drivers build config 00:02:11.316 common/ionic: not in enabled drivers build config 00:02:11.316 common/mvep: not in enabled drivers build config 00:02:11.316 common/octeontx: not in enabled drivers build config 00:02:11.316 bus/auxiliary: not in enabled drivers build config 00:02:11.316 bus/cdx: not in enabled drivers build config 00:02:11.316 bus/dpaa: not in enabled drivers build config 00:02:11.316 bus/fslmc: not in enabled drivers build config 00:02:11.316 bus/ifpga: not in enabled drivers build config 00:02:11.316 bus/platform: not in enabled drivers build config 00:02:11.316 bus/uacce: not in enabled drivers build config 00:02:11.316 bus/vmbus: not in enabled drivers build config 00:02:11.316 common/cnxk: not in enabled drivers build config 00:02:11.316 common/mlx5: not in enabled drivers build config 00:02:11.316 common/nfp: not in enabled drivers build config 00:02:11.316 common/nitrox: not in enabled drivers build config 00:02:11.316 common/qat: not in enabled drivers build config 00:02:11.316 common/sfc_efx: not in enabled drivers build config 00:02:11.316 mempool/bucket: not in enabled drivers build config 00:02:11.316 mempool/cnxk: not in enabled drivers build config 00:02:11.316 mempool/dpaa: not in enabled drivers build config 00:02:11.316 mempool/dpaa2: not in enabled drivers build config 00:02:11.316 mempool/octeontx: not in enabled drivers build config 00:02:11.316 mempool/stack: not in enabled drivers build config 00:02:11.316 dma/cnxk: not in enabled 
drivers build config 00:02:11.316 dma/dpaa: not in enabled drivers build config 00:02:11.316 dma/dpaa2: not in enabled drivers build config 00:02:11.316 dma/hisilicon: not in enabled drivers build config 00:02:11.316 dma/idxd: not in enabled drivers build config 00:02:11.316 dma/ioat: not in enabled drivers build config 00:02:11.316 dma/skeleton: not in enabled drivers build config 00:02:11.316 net/af_packet: not in enabled drivers build config 00:02:11.316 net/af_xdp: not in enabled drivers build config 00:02:11.316 net/ark: not in enabled drivers build config 00:02:11.316 net/atlantic: not in enabled drivers build config 00:02:11.316 net/avp: not in enabled drivers build config 00:02:11.316 net/axgbe: not in enabled drivers build config 00:02:11.316 net/bnx2x: not in enabled drivers build config 00:02:11.316 net/bnxt: not in enabled drivers build config 00:02:11.316 net/bonding: not in enabled drivers build config 00:02:11.316 net/cnxk: not in enabled drivers build config 00:02:11.316 net/cpfl: not in enabled drivers build config 00:02:11.316 net/cxgbe: not in enabled drivers build config 00:02:11.316 net/dpaa: not in enabled drivers build config 00:02:11.316 net/dpaa2: not in enabled drivers build config 00:02:11.316 net/e1000: not in enabled drivers build config 00:02:11.316 net/ena: not in enabled drivers build config 00:02:11.316 net/enetc: not in enabled drivers build config 00:02:11.316 net/enetfec: not in enabled drivers build config 00:02:11.316 net/enic: not in enabled drivers build config 00:02:11.316 net/failsafe: not in enabled drivers build config 00:02:11.316 net/fm10k: not in enabled drivers build config 00:02:11.316 net/gve: not in enabled drivers build config 00:02:11.316 net/hinic: not in enabled drivers build config 00:02:11.316 net/hns3: not in enabled drivers build config 00:02:11.316 net/i40e: not in enabled drivers build config 00:02:11.316 net/iavf: not in enabled drivers build config 00:02:11.316 net/ice: not in enabled drivers build 
config 00:02:11.316 net/idpf: not in enabled drivers build config 00:02:11.316 net/igc: not in enabled drivers build config 00:02:11.316 net/ionic: not in enabled drivers build config 00:02:11.316 net/ipn3ke: not in enabled drivers build config 00:02:11.316 net/ixgbe: not in enabled drivers build config 00:02:11.316 net/mana: not in enabled drivers build config 00:02:11.316 net/memif: not in enabled drivers build config 00:02:11.316 net/mlx4: not in enabled drivers build config 00:02:11.316 net/mlx5: not in enabled drivers build config 00:02:11.316 net/mvneta: not in enabled drivers build config 00:02:11.316 net/mvpp2: not in enabled drivers build config 00:02:11.316 net/netvsc: not in enabled drivers build config 00:02:11.316 net/nfb: not in enabled drivers build config 00:02:11.316 net/nfp: not in enabled drivers build config 00:02:11.316 net/ngbe: not in enabled drivers build config 00:02:11.316 net/null: not in enabled drivers build config 00:02:11.316 net/octeontx: not in enabled drivers build config 00:02:11.316 net/octeon_ep: not in enabled drivers build config 00:02:11.316 net/pcap: not in enabled drivers build config 00:02:11.316 net/pfe: not in enabled drivers build config 00:02:11.316 net/qede: not in enabled drivers build config 00:02:11.316 net/ring: not in enabled drivers build config 00:02:11.316 net/sfc: not in enabled drivers build config 00:02:11.316 net/softnic: not in enabled drivers build config 00:02:11.316 net/tap: not in enabled drivers build config 00:02:11.316 net/thunderx: not in enabled drivers build config 00:02:11.316 net/txgbe: not in enabled drivers build config 00:02:11.316 net/vdev_netvsc: not in enabled drivers build config 00:02:11.316 net/vhost: not in enabled drivers build config 00:02:11.316 net/virtio: not in enabled drivers build config 00:02:11.316 net/vmxnet3: not in enabled drivers build config 00:02:11.316 raw/*: missing internal dependency, "rawdev" 00:02:11.316 crypto/armv8: not in enabled drivers build config 
00:02:11.316 crypto/bcmfs: not in enabled drivers build config 00:02:11.316 crypto/caam_jr: not in enabled drivers build config 00:02:11.317 crypto/ccp: not in enabled drivers build config 00:02:11.317 crypto/cnxk: not in enabled drivers build config 00:02:11.317 crypto/dpaa_sec: not in enabled drivers build config 00:02:11.317 crypto/dpaa2_sec: not in enabled drivers build config 00:02:11.317 crypto/ipsec_mb: not in enabled drivers build config 00:02:11.317 crypto/mlx5: not in enabled drivers build config 00:02:11.317 crypto/mvsam: not in enabled drivers build config 00:02:11.317 crypto/nitrox: not in enabled drivers build config 00:02:11.317 crypto/null: not in enabled drivers build config 00:02:11.317 crypto/octeontx: not in enabled drivers build config 00:02:11.317 crypto/openssl: not in enabled drivers build config 00:02:11.317 crypto/scheduler: not in enabled drivers build config 00:02:11.317 crypto/uadk: not in enabled drivers build config 00:02:11.317 crypto/virtio: not in enabled drivers build config 00:02:11.317 compress/isal: not in enabled drivers build config 00:02:11.317 compress/mlx5: not in enabled drivers build config 00:02:11.317 compress/nitrox: not in enabled drivers build config 00:02:11.317 compress/octeontx: not in enabled drivers build config 00:02:11.317 compress/zlib: not in enabled drivers build config 00:02:11.317 regex/*: missing internal dependency, "regexdev" 00:02:11.317 ml/*: missing internal dependency, "mldev" 00:02:11.317 vdpa/ifc: not in enabled drivers build config 00:02:11.317 vdpa/mlx5: not in enabled drivers build config 00:02:11.317 vdpa/nfp: not in enabled drivers build config 00:02:11.317 vdpa/sfc: not in enabled drivers build config 00:02:11.317 event/*: missing internal dependency, "eventdev" 00:02:11.317 baseband/*: missing internal dependency, "bbdev" 00:02:11.317 gpu/*: missing internal dependency, "gpudev" 00:02:11.317 00:02:11.317 00:02:11.317 Build targets in project: 85 00:02:11.317 00:02:11.317 DPDK 24.03.0 
00:02:11.317 00:02:11.317 User defined options 00:02:11.317 buildtype : debug 00:02:11.317 default_library : shared 00:02:11.317 libdir : lib 00:02:11.317 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:11.317 b_sanitize : address 00:02:11.317 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:11.317 c_link_args : 00:02:11.317 cpu_instruction_set: native 00:02:11.317 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:11.317 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:11.317 enable_docs : false 00:02:11.317 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:02:11.317 enable_kmods : false 00:02:11.317 max_lcores : 128 00:02:11.317 tests : false 00:02:11.317 00:02:11.317 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:11.317 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:11.317 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:11.317 [2/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:11.317 [3/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:11.317 [4/268] Linking static target lib/librte_log.a 00:02:11.317 [5/268] Linking static target lib/librte_kvargs.a 00:02:11.317 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:11.317 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:11.317 [8/268] Compiling C object 
lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:11.317 [9/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.317 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:11.317 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:11.581 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:11.581 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:11.581 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:11.581 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:11.581 [16/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:11.581 [17/268] Linking static target lib/librte_telemetry.a 00:02:11.581 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:11.844 [19/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.844 [20/268] Linking target lib/librte_log.so.24.1 00:02:11.844 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:12.104 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:12.104 [23/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:12.104 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:12.104 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:12.104 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:12.104 [27/268] Linking target lib/librte_kvargs.so.24.1 00:02:12.104 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:12.104 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:12.364 [30/268] Generating symbol file 
lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:12.364 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:12.364 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:12.364 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:12.364 [34/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.624 [35/268] Linking target lib/librte_telemetry.so.24.1 00:02:12.624 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:12.624 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:12.624 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:12.624 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:12.624 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:12.882 [41/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:12.882 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:12.882 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:12.883 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:12.883 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:12.883 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:12.883 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:13.143 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:13.143 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:13.143 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:13.402 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:13.402 
[52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:13.661 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:13.661 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:13.661 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:13.661 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:13.661 [57/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:13.661 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:13.661 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:13.661 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:13.661 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:13.924 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:13.924 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:14.193 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:14.193 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:14.193 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:14.193 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:14.193 [68/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:14.193 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:14.193 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:14.461 [71/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:14.461 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:14.461 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:14.461 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 
00:02:14.461 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:14.721 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:14.721 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:14.721 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:14.980 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:14.980 [80/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:14.980 [81/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:14.980 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:14.980 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:14.980 [84/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:14.980 [85/268] Linking static target lib/librte_eal.a 00:02:15.240 [86/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:15.240 [87/268] Linking static target lib/librte_ring.a 00:02:15.240 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:15.240 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:15.499 [90/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:15.499 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:15.499 [92/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:15.499 [93/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:15.499 [94/268] Linking static target lib/librte_rcu.a 00:02:15.758 [95/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.758 [96/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:15.758 [97/268] Linking static target lib/librte_mempool.a 00:02:15.758 [98/268] Compiling C object 
lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:15.758 [99/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:15.759 [100/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:16.018 [101/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:16.018 [102/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:16.018 [103/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:16.018 [104/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.018 [105/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:16.278 [106/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:16.278 [107/268] Linking static target lib/librte_mbuf.a 00:02:16.278 [108/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:16.537 [109/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:16.537 [110/268] Linking static target lib/librte_meter.a 00:02:16.537 [111/268] Linking static target lib/librte_net.a 00:02:16.537 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:16.537 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:16.537 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:16.797 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:16.797 [116/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.797 [117/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.797 [118/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.797 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:17.057 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:17.316 [121/268] Generating lib/mbuf.sym_chk 
with a custom command (wrapped by meson to capture output) 00:02:17.316 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:17.576 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:17.576 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:17.576 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:17.576 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:17.576 [127/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:17.576 [128/268] Linking static target lib/librte_pci.a 00:02:17.576 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:17.836 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:17.836 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:17.836 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:17.836 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:17.836 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:17.836 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:17.836 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:18.095 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:18.095 [138/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.095 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:18.095 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:18.095 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:18.095 [142/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:18.095 [143/268] Compiling C object 
lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:18.095 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:18.095 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:18.095 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:18.095 [147/268] Linking static target lib/librte_cmdline.a 00:02:18.355 [148/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:18.355 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:18.614 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:18.614 [151/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:18.614 [152/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:18.877 [153/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:18.877 [154/268] Linking static target lib/librte_timer.a 00:02:18.877 [155/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:19.159 [156/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:19.159 [157/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:19.159 [158/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:19.159 [159/268] Linking static target lib/librte_ethdev.a 00:02:19.159 [160/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:19.159 [161/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:19.159 [162/268] Linking static target lib/librte_hash.a 00:02:19.418 [163/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:19.418 [164/268] Linking static target lib/librte_compressdev.a 00:02:19.418 [165/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:19.418 [166/268] Compiling C object 
lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:19.418 [167/268] Linking static target lib/librte_dmadev.a 00:02:19.677 [168/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.677 [169/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:19.677 [170/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:19.677 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:19.936 [172/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:19.936 [173/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.195 [174/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:20.195 [175/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:20.195 [176/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:20.195 [177/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.195 [178/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:20.195 [179/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.453 [180/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.454 [181/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:20.454 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:20.454 [183/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:20.454 [184/268] Linking static target lib/librte_cryptodev.a 00:02:20.712 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:20.712 [186/268] Linking static target lib/librte_power.a 00:02:20.971 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:20.971 [188/268] 
Linking static target lib/librte_reorder.a 00:02:20.971 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:20.971 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:20.971 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:20.971 [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:20.971 [193/268] Linking static target lib/librte_security.a 00:02:21.538 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.538 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:21.797 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.797 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.797 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:21.797 [199/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:22.056 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:22.057 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:22.057 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:22.316 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:22.316 [204/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:22.316 [205/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:22.575 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:22.575 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:22.575 [208/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:22.575 [209/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:22.575 [210/268] Linking static 
target drivers/libtmp_rte_bus_vdev.a 00:02:22.835 [211/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:22.835 [212/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:22.835 [213/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:22.835 [214/268] Linking static target drivers/librte_bus_pci.a 00:02:22.835 [215/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.835 [216/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:22.835 [217/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:22.835 [218/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:22.835 [219/268] Linking static target drivers/librte_bus_vdev.a 00:02:22.835 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:22.835 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:23.095 [222/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:23.095 [223/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:23.095 [224/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:23.095 [225/268] Linking static target drivers/librte_mempool_ring.a 00:02:23.354 [226/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.354 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.293 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:25.670 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.670 [230/268] Linking target lib/librte_eal.so.24.1 
00:02:25.929 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:25.929 [232/268] Linking target lib/librte_timer.so.24.1 00:02:25.929 [233/268] Linking target lib/librte_meter.so.24.1 00:02:25.929 [234/268] Linking target lib/librte_pci.so.24.1 00:02:25.929 [235/268] Linking target lib/librte_dmadev.so.24.1 00:02:25.929 [236/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:25.929 [237/268] Linking target lib/librte_ring.so.24.1 00:02:25.929 [238/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:25.929 [239/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:25.929 [240/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:25.929 [241/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:25.929 [242/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:26.188 [243/268] Linking target lib/librte_rcu.so.24.1 00:02:26.188 [244/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:26.188 [245/268] Linking target lib/librte_mempool.so.24.1 00:02:26.188 [246/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:26.188 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:26.188 [248/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:26.188 [249/268] Linking target lib/librte_mbuf.so.24.1 00:02:26.447 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:26.447 [251/268] Linking target lib/librte_net.so.24.1 00:02:26.447 [252/268] Linking target lib/librte_reorder.so.24.1 00:02:26.447 [253/268] Linking target lib/librte_cryptodev.so.24.1 00:02:26.447 [254/268] Linking target lib/librte_compressdev.so.24.1 00:02:26.706 [255/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 
00:02:26.706 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:26.706 [257/268] Linking target lib/librte_security.so.24.1 00:02:26.706 [258/268] Linking target lib/librte_hash.so.24.1 00:02:26.706 [259/268] Linking target lib/librte_cmdline.so.24.1 00:02:26.706 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:28.083 [261/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:28.083 [262/268] Linking static target lib/librte_vhost.a 00:02:28.083 [263/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.083 [264/268] Linking target lib/librte_ethdev.so.24.1 00:02:28.083 [265/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:28.341 [266/268] Linking target lib/librte_power.so.24.1 00:02:30.249 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.508 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:30.508 INFO: autodetecting backend as ninja 00:02:30.508 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:52.431 CC lib/log/log.o 00:02:52.431 CC lib/log/log_deprecated.o 00:02:52.431 CC lib/ut/ut.o 00:02:52.431 CC lib/log/log_flags.o 00:02:52.431 CC lib/ut_mock/mock.o 00:02:52.431 LIB libspdk_ut.a 00:02:52.431 LIB libspdk_ut_mock.a 00:02:52.431 SO libspdk_ut_mock.so.6.0 00:02:52.431 SO libspdk_ut.so.2.0 00:02:52.431 LIB libspdk_log.a 00:02:52.431 SO libspdk_log.so.7.1 00:02:52.431 SYMLINK libspdk_ut.so 00:02:52.431 SYMLINK libspdk_ut_mock.so 00:02:52.431 SYMLINK libspdk_log.so 00:02:52.690 CC lib/ioat/ioat.o 00:02:52.690 CXX lib/trace_parser/trace.o 00:02:52.690 CC lib/dma/dma.o 00:02:52.690 CC lib/util/base64.o 00:02:52.690 CC lib/util/bit_array.o 00:02:52.690 CC lib/util/crc16.o 00:02:52.690 CC lib/util/cpuset.o 00:02:52.690 CC lib/util/crc32.o 
00:02:52.690 CC lib/util/crc32c.o 00:02:52.949 CC lib/vfio_user/host/vfio_user_pci.o 00:02:52.949 CC lib/util/crc32_ieee.o 00:02:52.949 CC lib/util/crc64.o 00:02:52.949 CC lib/vfio_user/host/vfio_user.o 00:02:52.949 CC lib/util/dif.o 00:02:52.949 CC lib/util/fd.o 00:02:52.949 CC lib/util/fd_group.o 00:02:53.206 LIB libspdk_dma.a 00:02:53.206 CC lib/util/file.o 00:02:53.206 SO libspdk_dma.so.5.0 00:02:53.206 CC lib/util/hexlify.o 00:02:53.206 CC lib/util/iov.o 00:02:53.206 SYMLINK libspdk_dma.so 00:02:53.206 CC lib/util/math.o 00:02:53.206 CC lib/util/net.o 00:02:53.206 LIB libspdk_vfio_user.a 00:02:53.206 LIB libspdk_ioat.a 00:02:53.206 SO libspdk_ioat.so.7.0 00:02:53.206 SO libspdk_vfio_user.so.5.0 00:02:53.206 CC lib/util/pipe.o 00:02:53.206 SYMLINK libspdk_ioat.so 00:02:53.206 SYMLINK libspdk_vfio_user.so 00:02:53.464 CC lib/util/string.o 00:02:53.464 CC lib/util/strerror_tls.o 00:02:53.464 CC lib/util/uuid.o 00:02:53.464 CC lib/util/xor.o 00:02:53.464 CC lib/util/zipf.o 00:02:53.464 CC lib/util/md5.o 00:02:54.031 LIB libspdk_util.a 00:02:54.031 LIB libspdk_trace_parser.a 00:02:54.031 SO libspdk_util.so.10.1 00:02:54.031 SO libspdk_trace_parser.so.6.0 00:02:54.031 SYMLINK libspdk_util.so 00:02:54.031 SYMLINK libspdk_trace_parser.so 00:02:54.290 CC lib/json/json_parse.o 00:02:54.290 CC lib/json/json_util.o 00:02:54.290 CC lib/json/json_write.o 00:02:54.290 CC lib/rdma_utils/rdma_utils.o 00:02:54.290 CC lib/idxd/idxd.o 00:02:54.290 CC lib/idxd/idxd_user.o 00:02:54.290 CC lib/idxd/idxd_kernel.o 00:02:54.290 CC lib/env_dpdk/env.o 00:02:54.290 CC lib/vmd/vmd.o 00:02:54.290 CC lib/conf/conf.o 00:02:54.549 CC lib/env_dpdk/memory.o 00:02:54.549 CC lib/vmd/led.o 00:02:54.549 CC lib/env_dpdk/pci.o 00:02:54.549 LIB libspdk_conf.a 00:02:54.807 CC lib/env_dpdk/init.o 00:02:54.807 LIB libspdk_rdma_utils.a 00:02:54.807 SO libspdk_conf.so.6.0 00:02:54.807 LIB libspdk_json.a 00:02:54.807 SO libspdk_rdma_utils.so.1.0 00:02:54.807 SO libspdk_json.so.6.0 00:02:54.807 SYMLINK 
libspdk_conf.so 00:02:54.807 SYMLINK libspdk_rdma_utils.so 00:02:54.807 CC lib/env_dpdk/threads.o 00:02:54.807 CC lib/env_dpdk/pci_ioat.o 00:02:54.807 CC lib/env_dpdk/pci_virtio.o 00:02:54.807 SYMLINK libspdk_json.so 00:02:55.066 CC lib/env_dpdk/pci_vmd.o 00:02:55.066 CC lib/env_dpdk/pci_idxd.o 00:02:55.066 CC lib/env_dpdk/pci_event.o 00:02:55.066 CC lib/rdma_provider/common.o 00:02:55.066 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:55.066 CC lib/env_dpdk/sigbus_handler.o 00:02:55.066 CC lib/env_dpdk/pci_dpdk.o 00:02:55.066 LIB libspdk_idxd.a 00:02:55.324 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:55.324 LIB libspdk_vmd.a 00:02:55.324 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:55.324 SO libspdk_idxd.so.12.1 00:02:55.324 SO libspdk_vmd.so.6.0 00:02:55.324 SYMLINK libspdk_idxd.so 00:02:55.324 CC lib/jsonrpc/jsonrpc_server.o 00:02:55.324 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:55.324 LIB libspdk_rdma_provider.a 00:02:55.324 CC lib/jsonrpc/jsonrpc_client.o 00:02:55.324 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:55.324 SYMLINK libspdk_vmd.so 00:02:55.324 SO libspdk_rdma_provider.so.7.0 00:02:55.324 SYMLINK libspdk_rdma_provider.so 00:02:55.583 LIB libspdk_jsonrpc.a 00:02:55.841 SO libspdk_jsonrpc.so.6.0 00:02:55.841 SYMLINK libspdk_jsonrpc.so 00:02:56.409 CC lib/rpc/rpc.o 00:02:56.409 LIB libspdk_env_dpdk.a 00:02:56.409 SO libspdk_env_dpdk.so.15.1 00:02:56.409 LIB libspdk_rpc.a 00:02:56.668 SO libspdk_rpc.so.6.0 00:02:56.668 SYMLINK libspdk_rpc.so 00:02:56.668 SYMLINK libspdk_env_dpdk.so 00:02:56.927 CC lib/keyring/keyring.o 00:02:56.927 CC lib/keyring/keyring_rpc.o 00:02:56.927 CC lib/notify/notify_rpc.o 00:02:56.927 CC lib/notify/notify.o 00:02:56.927 CC lib/trace/trace_flags.o 00:02:56.927 CC lib/trace/trace.o 00:02:56.927 CC lib/trace/trace_rpc.o 00:02:57.185 LIB libspdk_notify.a 00:02:57.185 SO libspdk_notify.so.6.0 00:02:57.185 LIB libspdk_keyring.a 00:02:57.443 LIB libspdk_trace.a 00:02:57.443 SO libspdk_keyring.so.2.0 00:02:57.443 SYMLINK libspdk_notify.so 
00:02:57.443 SO libspdk_trace.so.11.0 00:02:57.443 SYMLINK libspdk_keyring.so 00:02:57.443 SYMLINK libspdk_trace.so 00:02:57.742 CC lib/thread/iobuf.o 00:02:57.742 CC lib/thread/thread.o 00:02:57.742 CC lib/sock/sock.o 00:02:57.742 CC lib/sock/sock_rpc.o 00:02:58.309 LIB libspdk_sock.a 00:02:58.309 SO libspdk_sock.so.10.0 00:02:58.567 SYMLINK libspdk_sock.so 00:02:58.826 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:58.826 CC lib/nvme/nvme_fabric.o 00:02:58.826 CC lib/nvme/nvme_ctrlr.o 00:02:58.826 CC lib/nvme/nvme_pcie.o 00:02:58.826 CC lib/nvme/nvme_ns_cmd.o 00:02:58.826 CC lib/nvme/nvme_ns.o 00:02:58.826 CC lib/nvme/nvme.o 00:02:58.826 CC lib/nvme/nvme_qpair.o 00:02:58.826 CC lib/nvme/nvme_pcie_common.o 00:02:59.761 CC lib/nvme/nvme_quirks.o 00:02:59.761 CC lib/nvme/nvme_transport.o 00:02:59.761 CC lib/nvme/nvme_discovery.o 00:02:59.761 LIB libspdk_thread.a 00:02:59.761 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:59.761 SO libspdk_thread.so.11.0 00:03:00.019 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:00.019 CC lib/nvme/nvme_tcp.o 00:03:00.019 SYMLINK libspdk_thread.so 00:03:00.019 CC lib/nvme/nvme_opal.o 00:03:00.019 CC lib/nvme/nvme_io_msg.o 00:03:00.278 CC lib/nvme/nvme_poll_group.o 00:03:00.278 CC lib/nvme/nvme_zns.o 00:03:00.278 CC lib/nvme/nvme_stubs.o 00:03:00.537 CC lib/nvme/nvme_auth.o 00:03:00.537 CC lib/nvme/nvme_cuse.o 00:03:00.537 CC lib/nvme/nvme_rdma.o 00:03:00.795 CC lib/accel/accel.o 00:03:00.795 CC lib/blob/blobstore.o 00:03:00.795 CC lib/blob/request.o 00:03:01.054 CC lib/blob/zeroes.o 00:03:01.054 CC lib/blob/blob_bs_dev.o 00:03:01.054 CC lib/accel/accel_rpc.o 00:03:01.313 CC lib/accel/accel_sw.o 00:03:01.572 CC lib/init/json_config.o 00:03:01.572 CC lib/init/subsystem.o 00:03:01.572 CC lib/init/subsystem_rpc.o 00:03:01.572 CC lib/virtio/virtio.o 00:03:01.573 CC lib/fsdev/fsdev.o 00:03:01.831 CC lib/fsdev/fsdev_io.o 00:03:01.831 CC lib/init/rpc.o 00:03:01.831 CC lib/fsdev/fsdev_rpc.o 00:03:01.831 CC lib/virtio/virtio_vhost_user.o 00:03:01.831 CC 
lib/virtio/virtio_vfio_user.o 00:03:02.091 CC lib/virtio/virtio_pci.o 00:03:02.091 LIB libspdk_init.a 00:03:02.091 SO libspdk_init.so.6.0 00:03:02.091 SYMLINK libspdk_init.so 00:03:02.350 LIB libspdk_accel.a 00:03:02.350 SO libspdk_accel.so.16.0 00:03:02.350 LIB libspdk_virtio.a 00:03:02.350 CC lib/event/app.o 00:03:02.350 CC lib/event/reactor.o 00:03:02.350 CC lib/event/log_rpc.o 00:03:02.350 CC lib/event/scheduler_static.o 00:03:02.350 CC lib/event/app_rpc.o 00:03:02.350 SYMLINK libspdk_accel.so 00:03:02.350 SO libspdk_virtio.so.7.0 00:03:02.350 LIB libspdk_nvme.a 00:03:02.609 LIB libspdk_fsdev.a 00:03:02.609 SO libspdk_fsdev.so.2.0 00:03:02.609 SYMLINK libspdk_virtio.so 00:03:02.609 SYMLINK libspdk_fsdev.so 00:03:02.609 SO libspdk_nvme.so.15.0 00:03:02.609 CC lib/bdev/bdev.o 00:03:02.609 CC lib/bdev/bdev_rpc.o 00:03:02.609 CC lib/bdev/bdev_zone.o 00:03:02.609 CC lib/bdev/part.o 00:03:02.609 CC lib/bdev/scsi_nvme.o 00:03:02.866 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:02.866 SYMLINK libspdk_nvme.so 00:03:03.124 LIB libspdk_event.a 00:03:03.124 SO libspdk_event.so.14.0 00:03:03.124 SYMLINK libspdk_event.so 00:03:03.691 LIB libspdk_fuse_dispatcher.a 00:03:03.691 SO libspdk_fuse_dispatcher.so.1.0 00:03:03.691 SYMLINK libspdk_fuse_dispatcher.so 00:03:05.072 LIB libspdk_blob.a 00:03:05.333 SO libspdk_blob.so.12.0 00:03:05.333 SYMLINK libspdk_blob.so 00:03:05.899 CC lib/blobfs/blobfs.o 00:03:05.899 CC lib/blobfs/tree.o 00:03:05.899 CC lib/lvol/lvol.o 00:03:06.157 LIB libspdk_bdev.a 00:03:06.415 SO libspdk_bdev.so.17.0 00:03:06.415 SYMLINK libspdk_bdev.so 00:03:06.673 CC lib/ftl/ftl_core.o 00:03:06.673 CC lib/ftl/ftl_init.o 00:03:06.673 CC lib/ftl/ftl_layout.o 00:03:06.673 CC lib/ftl/ftl_debug.o 00:03:06.673 CC lib/nvmf/ctrlr.o 00:03:06.673 CC lib/nbd/nbd.o 00:03:06.673 CC lib/scsi/dev.o 00:03:06.673 CC lib/ublk/ublk.o 00:03:06.673 LIB libspdk_blobfs.a 00:03:06.932 SO libspdk_blobfs.so.11.0 00:03:06.932 SYMLINK libspdk_blobfs.so 00:03:06.932 CC lib/ublk/ublk_rpc.o 
00:03:06.932 CC lib/ftl/ftl_io.o 00:03:06.932 CC lib/ftl/ftl_sb.o 00:03:06.932 CC lib/scsi/lun.o 00:03:07.190 CC lib/ftl/ftl_l2p.o 00:03:07.190 CC lib/nvmf/ctrlr_discovery.o 00:03:07.190 LIB libspdk_lvol.a 00:03:07.190 SO libspdk_lvol.so.11.0 00:03:07.190 CC lib/nbd/nbd_rpc.o 00:03:07.190 CC lib/scsi/port.o 00:03:07.190 SYMLINK libspdk_lvol.so 00:03:07.190 CC lib/ftl/ftl_l2p_flat.o 00:03:07.190 CC lib/ftl/ftl_nv_cache.o 00:03:07.190 CC lib/ftl/ftl_band.o 00:03:07.190 CC lib/ftl/ftl_band_ops.o 00:03:07.190 CC lib/ftl/ftl_writer.o 00:03:07.449 CC lib/scsi/scsi.o 00:03:07.449 LIB libspdk_nbd.a 00:03:07.449 SO libspdk_nbd.so.7.0 00:03:07.449 CC lib/nvmf/ctrlr_bdev.o 00:03:07.449 SYMLINK libspdk_nbd.so 00:03:07.449 CC lib/ftl/ftl_rq.o 00:03:07.449 LIB libspdk_ublk.a 00:03:07.449 CC lib/scsi/scsi_bdev.o 00:03:07.449 SO libspdk_ublk.so.3.0 00:03:07.707 CC lib/ftl/ftl_reloc.o 00:03:07.707 SYMLINK libspdk_ublk.so 00:03:07.707 CC lib/nvmf/subsystem.o 00:03:07.707 CC lib/ftl/ftl_l2p_cache.o 00:03:07.707 CC lib/ftl/ftl_p2l.o 00:03:07.707 CC lib/ftl/ftl_p2l_log.o 00:03:07.707 CC lib/ftl/mngt/ftl_mngt.o 00:03:07.965 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:07.965 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:08.223 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:08.223 CC lib/scsi/scsi_pr.o 00:03:08.223 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:08.223 CC lib/nvmf/nvmf.o 00:03:08.223 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:08.223 CC lib/scsi/scsi_rpc.o 00:03:08.223 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:08.481 CC lib/nvmf/nvmf_rpc.o 00:03:08.481 CC lib/nvmf/transport.o 00:03:08.481 CC lib/scsi/task.o 00:03:08.481 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:08.481 CC lib/nvmf/tcp.o 00:03:08.481 CC lib/nvmf/stubs.o 00:03:08.481 CC lib/nvmf/mdns_server.o 00:03:08.740 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:08.740 LIB libspdk_scsi.a 00:03:08.740 SO libspdk_scsi.so.9.0 00:03:08.998 CC lib/nvmf/rdma.o 00:03:08.998 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:08.998 SYMLINK libspdk_scsi.so 00:03:08.998 CC 
lib/ftl/mngt/ftl_mngt_p2l.o 00:03:08.998 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:09.256 CC lib/nvmf/auth.o 00:03:09.257 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:09.257 CC lib/ftl/utils/ftl_conf.o 00:03:09.257 CC lib/ftl/utils/ftl_md.o 00:03:09.515 CC lib/ftl/utils/ftl_mempool.o 00:03:09.515 CC lib/iscsi/conn.o 00:03:09.515 CC lib/ftl/utils/ftl_bitmap.o 00:03:09.515 CC lib/vhost/vhost.o 00:03:09.516 CC lib/iscsi/init_grp.o 00:03:09.516 CC lib/vhost/vhost_rpc.o 00:03:09.516 CC lib/iscsi/iscsi.o 00:03:09.775 CC lib/ftl/utils/ftl_property.o 00:03:09.775 CC lib/iscsi/param.o 00:03:09.775 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:10.034 CC lib/vhost/vhost_scsi.o 00:03:10.034 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:10.293 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:10.293 CC lib/iscsi/portal_grp.o 00:03:10.293 CC lib/iscsi/tgt_node.o 00:03:10.293 CC lib/iscsi/iscsi_subsystem.o 00:03:10.293 CC lib/iscsi/iscsi_rpc.o 00:03:10.293 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:10.553 CC lib/iscsi/task.o 00:03:10.553 CC lib/vhost/vhost_blk.o 00:03:10.553 CC lib/vhost/rte_vhost_user.o 00:03:10.553 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:10.812 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:10.812 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:10.812 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:10.812 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:10.812 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:10.812 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:10.812 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:11.077 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:11.077 CC lib/ftl/base/ftl_base_dev.o 00:03:11.077 CC lib/ftl/base/ftl_base_bdev.o 00:03:11.077 CC lib/ftl/ftl_trace.o 00:03:11.343 LIB libspdk_iscsi.a 00:03:11.343 LIB libspdk_ftl.a 00:03:11.603 SO libspdk_iscsi.so.8.0 00:03:11.603 SO libspdk_ftl.so.9.0 00:03:11.603 SYMLINK libspdk_iscsi.so 00:03:11.603 LIB libspdk_vhost.a 00:03:11.862 LIB libspdk_nvmf.a 00:03:11.862 SO libspdk_vhost.so.8.0 00:03:11.862 SYMLINK libspdk_vhost.so 00:03:11.862 SO 
libspdk_nvmf.so.20.0 00:03:11.862 SYMLINK libspdk_ftl.so 00:03:12.122 SYMLINK libspdk_nvmf.so 00:03:12.690 CC module/env_dpdk/env_dpdk_rpc.o 00:03:12.690 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:12.690 CC module/sock/posix/posix.o 00:03:12.690 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:12.690 CC module/scheduler/gscheduler/gscheduler.o 00:03:12.690 CC module/accel/ioat/accel_ioat.o 00:03:12.690 CC module/blob/bdev/blob_bdev.o 00:03:12.690 CC module/keyring/file/keyring.o 00:03:12.690 CC module/fsdev/aio/fsdev_aio.o 00:03:12.690 CC module/accel/error/accel_error.o 00:03:12.690 LIB libspdk_env_dpdk_rpc.a 00:03:12.690 SO libspdk_env_dpdk_rpc.so.6.0 00:03:12.948 SYMLINK libspdk_env_dpdk_rpc.so 00:03:12.948 CC module/accel/error/accel_error_rpc.o 00:03:12.948 CC module/keyring/file/keyring_rpc.o 00:03:12.948 LIB libspdk_scheduler_gscheduler.a 00:03:12.948 LIB libspdk_scheduler_dpdk_governor.a 00:03:12.948 SO libspdk_scheduler_gscheduler.so.4.0 00:03:12.948 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:12.948 CC module/accel/ioat/accel_ioat_rpc.o 00:03:12.948 LIB libspdk_scheduler_dynamic.a 00:03:12.948 SO libspdk_scheduler_dynamic.so.4.0 00:03:12.948 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:12.948 SYMLINK libspdk_scheduler_gscheduler.so 00:03:12.948 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:12.948 LIB libspdk_accel_error.a 00:03:12.948 SYMLINK libspdk_scheduler_dynamic.so 00:03:12.948 LIB libspdk_keyring_file.a 00:03:12.948 LIB libspdk_blob_bdev.a 00:03:12.948 SO libspdk_accel_error.so.2.0 00:03:12.948 SO libspdk_keyring_file.so.2.0 00:03:12.948 LIB libspdk_accel_ioat.a 00:03:12.948 SO libspdk_blob_bdev.so.12.0 00:03:13.207 SO libspdk_accel_ioat.so.6.0 00:03:13.207 SYMLINK libspdk_keyring_file.so 00:03:13.207 SYMLINK libspdk_accel_error.so 00:03:13.207 CC module/fsdev/aio/linux_aio_mgr.o 00:03:13.207 SYMLINK libspdk_blob_bdev.so 00:03:13.207 CC module/keyring/linux/keyring.o 00:03:13.207 CC module/keyring/linux/keyring_rpc.o 
00:03:13.207 SYMLINK libspdk_accel_ioat.so 00:03:13.207 CC module/accel/dsa/accel_dsa.o 00:03:13.207 CC module/accel/dsa/accel_dsa_rpc.o 00:03:13.207 CC module/accel/iaa/accel_iaa.o 00:03:13.207 LIB libspdk_keyring_linux.a 00:03:13.465 SO libspdk_keyring_linux.so.1.0 00:03:13.465 SYMLINK libspdk_keyring_linux.so 00:03:13.465 CC module/accel/iaa/accel_iaa_rpc.o 00:03:13.465 CC module/blobfs/bdev/blobfs_bdev.o 00:03:13.465 CC module/bdev/delay/vbdev_delay.o 00:03:13.465 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:13.465 CC module/bdev/error/vbdev_error.o 00:03:13.465 LIB libspdk_accel_dsa.a 00:03:13.465 CC module/bdev/lvol/vbdev_lvol.o 00:03:13.465 CC module/bdev/gpt/gpt.o 00:03:13.465 SO libspdk_accel_dsa.so.5.0 00:03:13.465 LIB libspdk_fsdev_aio.a 00:03:13.465 LIB libspdk_accel_iaa.a 00:03:13.724 SO libspdk_accel_iaa.so.3.0 00:03:13.724 LIB libspdk_sock_posix.a 00:03:13.724 SO libspdk_fsdev_aio.so.1.0 00:03:13.724 SYMLINK libspdk_accel_dsa.so 00:03:13.724 CC module/bdev/error/vbdev_error_rpc.o 00:03:13.724 SO libspdk_sock_posix.so.6.0 00:03:13.724 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:13.724 SYMLINK libspdk_accel_iaa.so 00:03:13.724 SYMLINK libspdk_fsdev_aio.so 00:03:13.724 SYMLINK libspdk_sock_posix.so 00:03:13.724 CC module/bdev/gpt/vbdev_gpt.o 00:03:13.724 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:13.724 LIB libspdk_bdev_error.a 00:03:13.724 LIB libspdk_blobfs_bdev.a 00:03:13.724 CC module/bdev/malloc/bdev_malloc.o 00:03:13.724 CC module/bdev/null/bdev_null.o 00:03:13.724 SO libspdk_bdev_error.so.6.0 00:03:13.982 SO libspdk_blobfs_bdev.so.6.0 00:03:13.982 CC module/bdev/nvme/bdev_nvme.o 00:03:13.982 LIB libspdk_bdev_delay.a 00:03:13.982 CC module/bdev/passthru/vbdev_passthru.o 00:03:13.982 SYMLINK libspdk_bdev_error.so 00:03:13.982 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:13.982 SO libspdk_bdev_delay.so.6.0 00:03:13.982 SYMLINK libspdk_blobfs_bdev.so 00:03:13.982 SYMLINK libspdk_bdev_delay.so 00:03:13.982 LIB libspdk_bdev_gpt.a 00:03:13.982 SO 
libspdk_bdev_gpt.so.6.0 00:03:14.241 CC module/bdev/raid/bdev_raid.o 00:03:14.241 CC module/bdev/null/bdev_null_rpc.o 00:03:14.241 SYMLINK libspdk_bdev_gpt.so 00:03:14.241 CC module/bdev/raid/bdev_raid_rpc.o 00:03:14.241 CC module/bdev/split/vbdev_split.o 00:03:14.241 LIB libspdk_bdev_lvol.a 00:03:14.241 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:14.241 SO libspdk_bdev_lvol.so.6.0 00:03:14.241 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:14.241 LIB libspdk_bdev_malloc.a 00:03:14.241 SYMLINK libspdk_bdev_lvol.so 00:03:14.241 CC module/bdev/aio/bdev_aio.o 00:03:14.241 LIB libspdk_bdev_null.a 00:03:14.241 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:14.241 SO libspdk_bdev_malloc.so.6.0 00:03:14.241 SO libspdk_bdev_null.so.6.0 00:03:14.500 SYMLINK libspdk_bdev_malloc.so 00:03:14.500 CC module/bdev/aio/bdev_aio_rpc.o 00:03:14.500 CC module/bdev/split/vbdev_split_rpc.o 00:03:14.500 LIB libspdk_bdev_passthru.a 00:03:14.500 SYMLINK libspdk_bdev_null.so 00:03:14.500 SO libspdk_bdev_passthru.so.6.0 00:03:14.500 SYMLINK libspdk_bdev_passthru.so 00:03:14.500 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:14.500 CC module/bdev/nvme/nvme_rpc.o 00:03:14.500 CC module/bdev/nvme/bdev_mdns_client.o 00:03:14.500 CC module/bdev/iscsi/bdev_iscsi.o 00:03:14.500 CC module/bdev/ftl/bdev_ftl.o 00:03:14.500 LIB libspdk_bdev_split.a 00:03:14.758 SO libspdk_bdev_split.so.6.0 00:03:14.758 LIB libspdk_bdev_zone_block.a 00:03:14.758 SO libspdk_bdev_zone_block.so.6.0 00:03:14.758 LIB libspdk_bdev_aio.a 00:03:14.758 SYMLINK libspdk_bdev_split.so 00:03:14.758 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:14.758 SO libspdk_bdev_aio.so.6.0 00:03:14.758 CC module/bdev/nvme/vbdev_opal.o 00:03:14.758 SYMLINK libspdk_bdev_zone_block.so 00:03:14.758 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:14.758 SYMLINK libspdk_bdev_aio.so 00:03:14.758 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:15.017 CC module/bdev/raid/bdev_raid_sb.o 00:03:15.017 LIB libspdk_bdev_ftl.a 00:03:15.017 SO 
libspdk_bdev_ftl.so.6.0 00:03:15.017 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:15.017 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:15.017 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:15.017 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:15.017 SYMLINK libspdk_bdev_ftl.so 00:03:15.017 CC module/bdev/raid/raid0.o 00:03:15.017 CC module/bdev/raid/raid1.o 00:03:15.275 LIB libspdk_bdev_iscsi.a 00:03:15.275 SO libspdk_bdev_iscsi.so.6.0 00:03:15.275 CC module/bdev/raid/concat.o 00:03:15.275 CC module/bdev/raid/raid5f.o 00:03:15.275 SYMLINK libspdk_bdev_iscsi.so 00:03:15.534 LIB libspdk_bdev_virtio.a 00:03:15.793 SO libspdk_bdev_virtio.so.6.0 00:03:15.793 SYMLINK libspdk_bdev_virtio.so 00:03:15.793 LIB libspdk_bdev_raid.a 00:03:16.053 SO libspdk_bdev_raid.so.6.0 00:03:16.053 SYMLINK libspdk_bdev_raid.so 00:03:16.990 LIB libspdk_bdev_nvme.a 00:03:17.250 SO libspdk_bdev_nvme.so.7.1 00:03:17.250 SYMLINK libspdk_bdev_nvme.so 00:03:18.187 CC module/event/subsystems/vmd/vmd.o 00:03:18.187 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:18.187 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:18.187 CC module/event/subsystems/fsdev/fsdev.o 00:03:18.187 CC module/event/subsystems/iobuf/iobuf.o 00:03:18.187 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:18.187 CC module/event/subsystems/scheduler/scheduler.o 00:03:18.187 CC module/event/subsystems/sock/sock.o 00:03:18.187 CC module/event/subsystems/keyring/keyring.o 00:03:18.187 LIB libspdk_event_fsdev.a 00:03:18.187 LIB libspdk_event_scheduler.a 00:03:18.187 LIB libspdk_event_vmd.a 00:03:18.187 LIB libspdk_event_sock.a 00:03:18.187 LIB libspdk_event_vhost_blk.a 00:03:18.187 LIB libspdk_event_keyring.a 00:03:18.187 SO libspdk_event_fsdev.so.1.0 00:03:18.187 LIB libspdk_event_iobuf.a 00:03:18.187 SO libspdk_event_scheduler.so.4.0 00:03:18.187 SO libspdk_event_sock.so.5.0 00:03:18.187 SO libspdk_event_vmd.so.6.0 00:03:18.187 SO libspdk_event_keyring.so.1.0 00:03:18.187 SO libspdk_event_vhost_blk.so.3.0 00:03:18.187 SO 
libspdk_event_iobuf.so.3.0 00:03:18.187 SYMLINK libspdk_event_fsdev.so 00:03:18.187 SYMLINK libspdk_event_scheduler.so 00:03:18.187 SYMLINK libspdk_event_keyring.so 00:03:18.187 SYMLINK libspdk_event_vhost_blk.so 00:03:18.187 SYMLINK libspdk_event_vmd.so 00:03:18.187 SYMLINK libspdk_event_sock.so 00:03:18.187 SYMLINK libspdk_event_iobuf.so 00:03:18.756 CC module/event/subsystems/accel/accel.o 00:03:18.756 LIB libspdk_event_accel.a 00:03:18.756 SO libspdk_event_accel.so.6.0 00:03:19.016 SYMLINK libspdk_event_accel.so 00:03:19.276 CC module/event/subsystems/bdev/bdev.o 00:03:19.535 LIB libspdk_event_bdev.a 00:03:19.535 SO libspdk_event_bdev.so.6.0 00:03:19.535 SYMLINK libspdk_event_bdev.so 00:03:20.103 CC module/event/subsystems/scsi/scsi.o 00:03:20.103 CC module/event/subsystems/ublk/ublk.o 00:03:20.103 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:20.103 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:20.103 CC module/event/subsystems/nbd/nbd.o 00:03:20.103 LIB libspdk_event_scsi.a 00:03:20.103 LIB libspdk_event_ublk.a 00:03:20.103 LIB libspdk_event_nbd.a 00:03:20.103 SO libspdk_event_scsi.so.6.0 00:03:20.103 SO libspdk_event_ublk.so.3.0 00:03:20.363 SO libspdk_event_nbd.so.6.0 00:03:20.363 SYMLINK libspdk_event_ublk.so 00:03:20.363 LIB libspdk_event_nvmf.a 00:03:20.363 SYMLINK libspdk_event_nbd.so 00:03:20.363 SYMLINK libspdk_event_scsi.so 00:03:20.363 SO libspdk_event_nvmf.so.6.0 00:03:20.363 SYMLINK libspdk_event_nvmf.so 00:03:20.622 CC module/event/subsystems/iscsi/iscsi.o 00:03:20.622 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:20.881 LIB libspdk_event_vhost_scsi.a 00:03:20.881 SO libspdk_event_vhost_scsi.so.3.0 00:03:20.881 LIB libspdk_event_iscsi.a 00:03:20.881 SO libspdk_event_iscsi.so.6.0 00:03:20.881 SYMLINK libspdk_event_vhost_scsi.so 00:03:20.881 SYMLINK libspdk_event_iscsi.so 00:03:21.172 SO libspdk.so.6.0 00:03:21.172 SYMLINK libspdk.so 00:03:21.431 CC app/trace_record/trace_record.o 00:03:21.431 CC app/spdk_nvme_perf/perf.o 
00:03:21.431 CC app/spdk_lspci/spdk_lspci.o 00:03:21.431 CXX app/trace/trace.o 00:03:21.431 CC app/spdk_nvme_identify/identify.o 00:03:21.431 CC app/nvmf_tgt/nvmf_main.o 00:03:21.690 CC app/spdk_tgt/spdk_tgt.o 00:03:21.690 CC examples/util/zipf/zipf.o 00:03:21.690 CC app/iscsi_tgt/iscsi_tgt.o 00:03:21.690 CC test/thread/poller_perf/poller_perf.o 00:03:21.690 LINK spdk_lspci 00:03:21.690 LINK nvmf_tgt 00:03:21.690 LINK zipf 00:03:21.690 LINK spdk_tgt 00:03:21.690 LINK spdk_trace_record 00:03:21.690 LINK poller_perf 00:03:21.690 LINK iscsi_tgt 00:03:21.950 CC app/spdk_nvme_discover/discovery_aer.o 00:03:21.950 LINK spdk_trace 00:03:21.950 CC app/spdk_top/spdk_top.o 00:03:22.209 CC app/spdk_dd/spdk_dd.o 00:03:22.209 CC examples/ioat/perf/perf.o 00:03:22.209 LINK spdk_nvme_discover 00:03:22.209 CC test/dma/test_dma/test_dma.o 00:03:22.209 CC app/fio/nvme/fio_plugin.o 00:03:22.209 CC examples/ioat/verify/verify.o 00:03:22.209 CC examples/vmd/lsvmd/lsvmd.o 00:03:22.469 LINK ioat_perf 00:03:22.469 LINK lsvmd 00:03:22.469 LINK verify 00:03:22.469 LINK spdk_nvme_perf 00:03:22.469 CC examples/idxd/perf/perf.o 00:03:22.469 LINK spdk_nvme_identify 00:03:22.469 LINK spdk_dd 00:03:22.728 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:22.728 CC examples/vmd/led/led.o 00:03:22.728 LINK led 00:03:22.728 LINK test_dma 00:03:22.728 CC app/vhost/vhost.o 00:03:22.728 LINK interrupt_tgt 00:03:22.728 CC app/fio/bdev/fio_plugin.o 00:03:22.728 CC examples/sock/hello_world/hello_sock.o 00:03:22.728 CC examples/thread/thread/thread_ex.o 00:03:22.728 LINK idxd_perf 00:03:22.728 LINK spdk_nvme 00:03:22.987 LINK vhost 00:03:22.987 TEST_HEADER include/spdk/accel.h 00:03:22.987 TEST_HEADER include/spdk/accel_module.h 00:03:22.987 TEST_HEADER include/spdk/assert.h 00:03:22.987 TEST_HEADER include/spdk/barrier.h 00:03:22.987 TEST_HEADER include/spdk/base64.h 00:03:22.987 TEST_HEADER include/spdk/bdev.h 00:03:22.987 TEST_HEADER include/spdk/bdev_module.h 00:03:22.987 TEST_HEADER 
include/spdk/bdev_zone.h 00:03:22.987 TEST_HEADER include/spdk/bit_array.h 00:03:22.987 TEST_HEADER include/spdk/bit_pool.h 00:03:22.987 TEST_HEADER include/spdk/blob_bdev.h 00:03:22.987 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:22.987 TEST_HEADER include/spdk/blobfs.h 00:03:22.987 TEST_HEADER include/spdk/blob.h 00:03:22.987 TEST_HEADER include/spdk/conf.h 00:03:22.987 LINK spdk_top 00:03:22.987 TEST_HEADER include/spdk/config.h 00:03:22.987 TEST_HEADER include/spdk/cpuset.h 00:03:22.987 TEST_HEADER include/spdk/crc16.h 00:03:22.987 TEST_HEADER include/spdk/crc32.h 00:03:22.987 TEST_HEADER include/spdk/crc64.h 00:03:22.987 TEST_HEADER include/spdk/dif.h 00:03:22.987 TEST_HEADER include/spdk/dma.h 00:03:22.987 TEST_HEADER include/spdk/endian.h 00:03:22.987 TEST_HEADER include/spdk/env_dpdk.h 00:03:22.987 TEST_HEADER include/spdk/env.h 00:03:22.987 LINK thread 00:03:22.987 TEST_HEADER include/spdk/event.h 00:03:22.987 TEST_HEADER include/spdk/fd_group.h 00:03:22.987 TEST_HEADER include/spdk/fd.h 00:03:22.987 TEST_HEADER include/spdk/file.h 00:03:22.987 TEST_HEADER include/spdk/fsdev.h 00:03:22.987 LINK hello_sock 00:03:22.987 TEST_HEADER include/spdk/fsdev_module.h 00:03:22.987 TEST_HEADER include/spdk/ftl.h 00:03:22.987 TEST_HEADER include/spdk/gpt_spec.h 00:03:22.987 TEST_HEADER include/spdk/hexlify.h 00:03:22.987 TEST_HEADER include/spdk/histogram_data.h 00:03:23.247 TEST_HEADER include/spdk/idxd.h 00:03:23.247 TEST_HEADER include/spdk/idxd_spec.h 00:03:23.247 TEST_HEADER include/spdk/init.h 00:03:23.247 TEST_HEADER include/spdk/ioat.h 00:03:23.247 TEST_HEADER include/spdk/ioat_spec.h 00:03:23.247 TEST_HEADER include/spdk/iscsi_spec.h 00:03:23.247 TEST_HEADER include/spdk/json.h 00:03:23.247 TEST_HEADER include/spdk/jsonrpc.h 00:03:23.247 TEST_HEADER include/spdk/keyring.h 00:03:23.247 TEST_HEADER include/spdk/keyring_module.h 00:03:23.247 TEST_HEADER include/spdk/likely.h 00:03:23.247 TEST_HEADER include/spdk/log.h 00:03:23.247 TEST_HEADER include/spdk/lvol.h 
00:03:23.247 TEST_HEADER include/spdk/md5.h 00:03:23.247 TEST_HEADER include/spdk/memory.h 00:03:23.247 TEST_HEADER include/spdk/mmio.h 00:03:23.247 TEST_HEADER include/spdk/nbd.h 00:03:23.247 TEST_HEADER include/spdk/net.h 00:03:23.247 TEST_HEADER include/spdk/notify.h 00:03:23.247 TEST_HEADER include/spdk/nvme.h 00:03:23.247 TEST_HEADER include/spdk/nvme_intel.h 00:03:23.247 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:23.247 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:23.247 TEST_HEADER include/spdk/nvme_spec.h 00:03:23.247 TEST_HEADER include/spdk/nvme_zns.h 00:03:23.247 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:23.247 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:23.247 TEST_HEADER include/spdk/nvmf.h 00:03:23.247 TEST_HEADER include/spdk/nvmf_spec.h 00:03:23.247 TEST_HEADER include/spdk/nvmf_transport.h 00:03:23.247 TEST_HEADER include/spdk/opal.h 00:03:23.247 TEST_HEADER include/spdk/opal_spec.h 00:03:23.247 TEST_HEADER include/spdk/pci_ids.h 00:03:23.247 TEST_HEADER include/spdk/pipe.h 00:03:23.247 TEST_HEADER include/spdk/queue.h 00:03:23.247 TEST_HEADER include/spdk/reduce.h 00:03:23.247 TEST_HEADER include/spdk/rpc.h 00:03:23.247 CC test/app/bdev_svc/bdev_svc.o 00:03:23.247 TEST_HEADER include/spdk/scheduler.h 00:03:23.247 CC test/nvme/aer/aer.o 00:03:23.247 TEST_HEADER include/spdk/scsi.h 00:03:23.247 TEST_HEADER include/spdk/scsi_spec.h 00:03:23.247 TEST_HEADER include/spdk/sock.h 00:03:23.247 TEST_HEADER include/spdk/stdinc.h 00:03:23.247 TEST_HEADER include/spdk/string.h 00:03:23.247 CC test/event/event_perf/event_perf.o 00:03:23.247 TEST_HEADER include/spdk/thread.h 00:03:23.248 TEST_HEADER include/spdk/trace.h 00:03:23.248 TEST_HEADER include/spdk/trace_parser.h 00:03:23.248 TEST_HEADER include/spdk/tree.h 00:03:23.248 TEST_HEADER include/spdk/ublk.h 00:03:23.248 TEST_HEADER include/spdk/util.h 00:03:23.248 TEST_HEADER include/spdk/uuid.h 00:03:23.248 TEST_HEADER include/spdk/version.h 00:03:23.248 TEST_HEADER include/spdk/vfio_user_pci.h 
00:03:23.248 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:23.248 TEST_HEADER include/spdk/vhost.h 00:03:23.248 TEST_HEADER include/spdk/vmd.h 00:03:23.248 TEST_HEADER include/spdk/xor.h 00:03:23.248 TEST_HEADER include/spdk/zipf.h 00:03:23.248 CXX test/cpp_headers/accel.o 00:03:23.248 CXX test/cpp_headers/accel_module.o 00:03:23.248 CXX test/cpp_headers/assert.o 00:03:23.248 CXX test/cpp_headers/barrier.o 00:03:23.248 CC test/env/mem_callbacks/mem_callbacks.o 00:03:23.248 LINK event_perf 00:03:23.248 LINK bdev_svc 00:03:23.506 LINK spdk_bdev 00:03:23.506 CXX test/cpp_headers/base64.o 00:03:23.506 CC test/nvme/reset/reset.o 00:03:23.506 CC examples/nvme/hello_world/hello_world.o 00:03:23.506 LINK aer 00:03:23.506 CXX test/cpp_headers/bdev.o 00:03:23.506 CC test/event/reactor/reactor.o 00:03:23.506 CC examples/accel/perf/accel_perf.o 00:03:23.764 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:23.764 CC examples/blob/hello_world/hello_blob.o 00:03:23.764 LINK reactor 00:03:23.764 LINK hello_world 00:03:23.764 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:23.764 LINK reset 00:03:23.764 CXX test/cpp_headers/bdev_module.o 00:03:23.764 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:23.764 LINK mem_callbacks 00:03:24.021 LINK hello_fsdev 00:03:24.021 LINK hello_blob 00:03:24.021 CXX test/cpp_headers/bdev_zone.o 00:03:24.021 CC test/event/reactor_perf/reactor_perf.o 00:03:24.021 CC test/env/vtophys/vtophys.o 00:03:24.021 CC examples/nvme/reconnect/reconnect.o 00:03:24.021 CC test/nvme/sgl/sgl.o 00:03:24.279 LINK reactor_perf 00:03:24.279 LINK accel_perf 00:03:24.279 CXX test/cpp_headers/bit_array.o 00:03:24.279 LINK vtophys 00:03:24.279 CC test/nvme/e2edp/nvme_dp.o 00:03:24.279 LINK nvme_fuzz 00:03:24.280 CC examples/blob/cli/blobcli.o 00:03:24.538 LINK sgl 00:03:24.538 CXX test/cpp_headers/bit_pool.o 00:03:24.538 LINK reconnect 00:03:24.538 CC test/nvme/overhead/overhead.o 00:03:24.538 CC test/event/app_repeat/app_repeat.o 00:03:24.538 CXX test/cpp_headers/blob_bdev.o 
00:03:24.538 LINK nvme_dp 00:03:24.796 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:24.796 CC test/nvme/err_injection/err_injection.o 00:03:24.796 CXX test/cpp_headers/blobfs_bdev.o 00:03:24.796 LINK app_repeat 00:03:24.796 LINK env_dpdk_post_init 00:03:24.796 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:25.054 LINK err_injection 00:03:25.054 LINK overhead 00:03:25.054 CC examples/nvme/arbitration/arbitration.o 00:03:25.054 CXX test/cpp_headers/blobfs.o 00:03:25.054 LINK blobcli 00:03:25.054 CC examples/bdev/hello_world/hello_bdev.o 00:03:25.312 CXX test/cpp_headers/blob.o 00:03:25.312 CC test/env/memory/memory_ut.o 00:03:25.312 CC test/event/scheduler/scheduler.o 00:03:25.312 CC test/env/pci/pci_ut.o 00:03:25.312 CC test/nvme/startup/startup.o 00:03:25.312 CXX test/cpp_headers/conf.o 00:03:25.312 LINK arbitration 00:03:25.312 LINK hello_bdev 00:03:25.570 CC examples/bdev/bdevperf/bdevperf.o 00:03:25.570 CXX test/cpp_headers/config.o 00:03:25.570 LINK nvme_manage 00:03:25.570 CXX test/cpp_headers/cpuset.o 00:03:25.570 LINK startup 00:03:25.570 CXX test/cpp_headers/crc16.o 00:03:25.829 LINK scheduler 00:03:25.829 CXX test/cpp_headers/crc32.o 00:03:25.829 CXX test/cpp_headers/crc64.o 00:03:25.829 LINK pci_ut 00:03:25.829 CC test/nvme/reserve/reserve.o 00:03:25.829 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:25.829 CC examples/nvme/hotplug/hotplug.o 00:03:26.087 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:26.087 CXX test/cpp_headers/dif.o 00:03:26.087 CC test/nvme/simple_copy/simple_copy.o 00:03:26.087 LINK reserve 00:03:26.087 LINK iscsi_fuzz 00:03:26.087 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:26.087 CXX test/cpp_headers/dma.o 00:03:26.087 LINK hotplug 00:03:26.345 CC examples/nvme/abort/abort.o 00:03:26.345 CXX test/cpp_headers/endian.o 00:03:26.345 LINK cmb_copy 00:03:26.345 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:26.602 LINK simple_copy 00:03:26.602 CC test/rpc_client/rpc_client_test.o 00:03:26.602 LINK vhost_fuzz 
00:03:26.602 CXX test/cpp_headers/env_dpdk.o 00:03:26.602 LINK memory_ut 00:03:26.602 LINK bdevperf 00:03:26.602 LINK pmr_persistence 00:03:26.602 CC test/accel/dif/dif.o 00:03:26.860 LINK rpc_client_test 00:03:26.860 CXX test/cpp_headers/env.o 00:03:26.860 LINK abort 00:03:26.860 CC test/app/histogram_perf/histogram_perf.o 00:03:26.860 CC test/blobfs/mkfs/mkfs.o 00:03:26.860 CC test/nvme/connect_stress/connect_stress.o 00:03:26.860 CXX test/cpp_headers/event.o 00:03:26.860 CC test/app/jsoncat/jsoncat.o 00:03:26.860 CC test/app/stub/stub.o 00:03:27.118 LINK histogram_perf 00:03:27.119 CC test/nvme/boot_partition/boot_partition.o 00:03:27.119 LINK connect_stress 00:03:27.119 LINK mkfs 00:03:27.119 LINK jsoncat 00:03:27.119 CXX test/cpp_headers/fd_group.o 00:03:27.119 CC test/lvol/esnap/esnap.o 00:03:27.119 LINK stub 00:03:27.119 LINK boot_partition 00:03:27.377 CC examples/nvmf/nvmf/nvmf.o 00:03:27.377 CXX test/cpp_headers/fd.o 00:03:27.377 CC test/nvme/compliance/nvme_compliance.o 00:03:27.377 CC test/nvme/fused_ordering/fused_ordering.o 00:03:27.377 CXX test/cpp_headers/file.o 00:03:27.377 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:27.377 CC test/nvme/fdp/fdp.o 00:03:27.634 CXX test/cpp_headers/fsdev.o 00:03:27.634 CC test/nvme/cuse/cuse.o 00:03:27.634 LINK doorbell_aers 00:03:27.635 LINK dif 00:03:27.635 LINK fused_ordering 00:03:27.635 CXX test/cpp_headers/fsdev_module.o 00:03:27.635 CXX test/cpp_headers/ftl.o 00:03:27.893 LINK nvmf 00:03:27.893 CXX test/cpp_headers/gpt_spec.o 00:03:27.893 CXX test/cpp_headers/hexlify.o 00:03:27.893 LINK fdp 00:03:27.893 CXX test/cpp_headers/histogram_data.o 00:03:27.893 LINK nvme_compliance 00:03:27.893 CXX test/cpp_headers/idxd.o 00:03:28.152 CXX test/cpp_headers/idxd_spec.o 00:03:28.152 CXX test/cpp_headers/init.o 00:03:28.152 CXX test/cpp_headers/ioat.o 00:03:28.152 CXX test/cpp_headers/ioat_spec.o 00:03:28.152 CXX test/cpp_headers/iscsi_spec.o 00:03:28.152 CXX test/cpp_headers/json.o 00:03:28.152 CXX 
test/cpp_headers/jsonrpc.o 00:03:28.152 CC test/bdev/bdevio/bdevio.o 00:03:28.152 CXX test/cpp_headers/keyring.o 00:03:28.152 CXX test/cpp_headers/keyring_module.o 00:03:28.152 CXX test/cpp_headers/likely.o 00:03:28.152 CXX test/cpp_headers/log.o 00:03:28.152 CXX test/cpp_headers/lvol.o 00:03:28.429 CXX test/cpp_headers/md5.o 00:03:28.430 CXX test/cpp_headers/memory.o 00:03:28.430 CXX test/cpp_headers/mmio.o 00:03:28.430 CXX test/cpp_headers/nbd.o 00:03:28.430 CXX test/cpp_headers/net.o 00:03:28.430 CXX test/cpp_headers/notify.o 00:03:28.430 CXX test/cpp_headers/nvme.o 00:03:28.430 CXX test/cpp_headers/nvme_intel.o 00:03:28.430 CXX test/cpp_headers/nvme_ocssd.o 00:03:28.430 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:28.430 CXX test/cpp_headers/nvme_spec.o 00:03:28.700 CXX test/cpp_headers/nvme_zns.o 00:03:28.701 CXX test/cpp_headers/nvmf_cmd.o 00:03:28.701 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:28.701 CXX test/cpp_headers/nvmf.o 00:03:28.701 LINK bdevio 00:03:28.701 CXX test/cpp_headers/nvmf_spec.o 00:03:28.701 CXX test/cpp_headers/nvmf_transport.o 00:03:28.701 CXX test/cpp_headers/opal.o 00:03:28.701 CXX test/cpp_headers/opal_spec.o 00:03:28.701 CXX test/cpp_headers/pci_ids.o 00:03:28.701 CXX test/cpp_headers/pipe.o 00:03:28.701 CXX test/cpp_headers/queue.o 00:03:28.960 CXX test/cpp_headers/reduce.o 00:03:28.960 CXX test/cpp_headers/rpc.o 00:03:28.960 CXX test/cpp_headers/scheduler.o 00:03:28.960 CXX test/cpp_headers/scsi.o 00:03:28.960 CXX test/cpp_headers/scsi_spec.o 00:03:28.960 CXX test/cpp_headers/sock.o 00:03:28.960 CXX test/cpp_headers/stdinc.o 00:03:28.960 CXX test/cpp_headers/string.o 00:03:28.960 CXX test/cpp_headers/thread.o 00:03:28.960 CXX test/cpp_headers/trace.o 00:03:28.960 CXX test/cpp_headers/trace_parser.o 00:03:29.218 CXX test/cpp_headers/tree.o 00:03:29.218 CXX test/cpp_headers/ublk.o 00:03:29.218 CXX test/cpp_headers/util.o 00:03:29.218 CXX test/cpp_headers/uuid.o 00:03:29.218 CXX test/cpp_headers/version.o 00:03:29.218 CXX 
test/cpp_headers/vfio_user_pci.o 00:03:29.218 LINK cuse 00:03:29.218 CXX test/cpp_headers/vfio_user_spec.o 00:03:29.218 CXX test/cpp_headers/vhost.o 00:03:29.218 CXX test/cpp_headers/vmd.o 00:03:29.218 CXX test/cpp_headers/xor.o 00:03:29.218 CXX test/cpp_headers/zipf.o 00:03:34.492 LINK esnap 00:03:34.492 00:03:34.492 real 1m34.568s 00:03:34.492 user 7m56.343s 00:03:34.492 sys 1m43.307s 00:03:34.492 16:00:00 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:34.492 ************************************ 00:03:34.492 END TEST make 00:03:34.492 ************************************ 00:03:34.492 16:00:00 make -- common/autotest_common.sh@10 -- $ set +x 00:03:34.492 16:00:00 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:34.492 16:00:00 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:34.492 16:00:00 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:34.492 16:00:00 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:34.492 16:00:00 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:34.492 16:00:00 -- pm/common@44 -- $ pid=5466 00:03:34.492 16:00:00 -- pm/common@50 -- $ kill -TERM 5466 00:03:34.492 16:00:00 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:34.492 16:00:00 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:34.492 16:00:00 -- pm/common@44 -- $ pid=5468 00:03:34.492 16:00:00 -- pm/common@50 -- $ kill -TERM 5468 00:03:34.492 16:00:00 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:34.492 16:00:00 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:34.492 16:00:00 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:34.492 16:00:00 -- common/autotest_common.sh@1711 -- # lcov --version 00:03:34.492 16:00:00 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 
00:03:34.492 16:00:00 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:34.492 16:00:00 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:34.492 16:00:00 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:34.492 16:00:00 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:34.492 16:00:00 -- scripts/common.sh@336 -- # IFS=.-: 00:03:34.492 16:00:00 -- scripts/common.sh@336 -- # read -ra ver1 00:03:34.492 16:00:00 -- scripts/common.sh@337 -- # IFS=.-: 00:03:34.493 16:00:00 -- scripts/common.sh@337 -- # read -ra ver2 00:03:34.493 16:00:00 -- scripts/common.sh@338 -- # local 'op=<' 00:03:34.493 16:00:00 -- scripts/common.sh@340 -- # ver1_l=2 00:03:34.493 16:00:00 -- scripts/common.sh@341 -- # ver2_l=1 00:03:34.493 16:00:00 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:34.493 16:00:00 -- scripts/common.sh@344 -- # case "$op" in 00:03:34.493 16:00:00 -- scripts/common.sh@345 -- # : 1 00:03:34.493 16:00:00 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:34.493 16:00:00 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:34.493 16:00:00 -- scripts/common.sh@365 -- # decimal 1 00:03:34.493 16:00:00 -- scripts/common.sh@353 -- # local d=1 00:03:34.493 16:00:00 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:34.493 16:00:00 -- scripts/common.sh@355 -- # echo 1 00:03:34.493 16:00:00 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:34.493 16:00:00 -- scripts/common.sh@366 -- # decimal 2 00:03:34.493 16:00:00 -- scripts/common.sh@353 -- # local d=2 00:03:34.493 16:00:00 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:34.493 16:00:00 -- scripts/common.sh@355 -- # echo 2 00:03:34.493 16:00:00 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:34.493 16:00:00 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:34.493 16:00:00 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:34.493 16:00:00 -- scripts/common.sh@368 -- # return 0 00:03:34.493 16:00:00 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:34.493 16:00:00 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:34.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:34.493 --rc genhtml_branch_coverage=1 00:03:34.493 --rc genhtml_function_coverage=1 00:03:34.493 --rc genhtml_legend=1 00:03:34.493 --rc geninfo_all_blocks=1 00:03:34.493 --rc geninfo_unexecuted_blocks=1 00:03:34.493 00:03:34.493 ' 00:03:34.493 16:00:00 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:34.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:34.493 --rc genhtml_branch_coverage=1 00:03:34.493 --rc genhtml_function_coverage=1 00:03:34.493 --rc genhtml_legend=1 00:03:34.493 --rc geninfo_all_blocks=1 00:03:34.493 --rc geninfo_unexecuted_blocks=1 00:03:34.493 00:03:34.493 ' 00:03:34.493 16:00:00 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:34.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:34.493 --rc genhtml_branch_coverage=1 00:03:34.493 --rc 
genhtml_function_coverage=1 00:03:34.493 --rc genhtml_legend=1 00:03:34.493 --rc geninfo_all_blocks=1 00:03:34.493 --rc geninfo_unexecuted_blocks=1 00:03:34.493 00:03:34.493 ' 00:03:34.493 16:00:00 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:34.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:34.493 --rc genhtml_branch_coverage=1 00:03:34.493 --rc genhtml_function_coverage=1 00:03:34.493 --rc genhtml_legend=1 00:03:34.493 --rc geninfo_all_blocks=1 00:03:34.493 --rc geninfo_unexecuted_blocks=1 00:03:34.493 00:03:34.493 ' 00:03:34.493 16:00:00 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:34.493 16:00:00 -- nvmf/common.sh@7 -- # uname -s 00:03:34.493 16:00:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:34.493 16:00:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:34.493 16:00:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:34.493 16:00:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:34.493 16:00:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:34.493 16:00:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:34.493 16:00:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:34.493 16:00:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:34.493 16:00:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:34.493 16:00:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:34.493 16:00:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c218c69e-bbef-4c86-a86c-3bd5562bb564 00:03:34.493 16:00:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=c218c69e-bbef-4c86-a86c-3bd5562bb564 00:03:34.493 16:00:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:34.493 16:00:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:34.493 16:00:00 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:34.493 16:00:00 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:03:34.493 16:00:00 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:34.493 16:00:00 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:34.493 16:00:00 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:34.493 16:00:00 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:34.493 16:00:00 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:34.493 16:00:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:34.493 16:00:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:34.493 16:00:00 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:34.493 16:00:00 -- paths/export.sh@5 -- # export PATH 00:03:34.493 16:00:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:34.493 16:00:00 -- nvmf/common.sh@51 -- # : 0 00:03:34.493 16:00:00 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:34.493 16:00:00 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:34.493 16:00:00 -- nvmf/common.sh@25 
-- # '[' 0 -eq 1 ']' 00:03:34.493 16:00:00 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:34.493 16:00:00 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:34.493 16:00:00 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:34.493 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:34.493 16:00:00 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:34.493 16:00:00 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:34.493 16:00:00 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:34.493 16:00:00 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:34.493 16:00:00 -- spdk/autotest.sh@32 -- # uname -s 00:03:34.493 16:00:00 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:34.493 16:00:00 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:34.493 16:00:00 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:34.493 16:00:00 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:34.493 16:00:00 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:34.493 16:00:00 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:34.493 16:00:00 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:34.493 16:00:00 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:34.493 16:00:00 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:34.493 16:00:00 -- spdk/autotest.sh@48 -- # udevadm_pid=56330 00:03:34.493 16:00:00 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:34.493 16:00:00 -- pm/common@17 -- # local monitor 00:03:34.493 16:00:00 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:34.493 16:00:00 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:34.493 16:00:00 -- pm/common@25 -- # sleep 1 00:03:34.493 16:00:00 -- pm/common@21 -- # date +%s 00:03:34.493 16:00:00 -- 
pm/common@21 -- # date +%s 00:03:34.493 16:00:00 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1734019200 00:03:34.493 16:00:00 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1734019200 00:03:34.753 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1734019200_collect-vmstat.pm.log 00:03:34.753 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1734019200_collect-cpu-load.pm.log 00:03:35.693 16:00:01 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:35.693 16:00:01 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:35.693 16:00:01 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:35.693 16:00:01 -- common/autotest_common.sh@10 -- # set +x 00:03:35.693 16:00:01 -- spdk/autotest.sh@59 -- # create_test_list 00:03:35.693 16:00:01 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:35.693 16:00:01 -- common/autotest_common.sh@10 -- # set +x 00:03:35.693 16:00:01 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:35.693 16:00:01 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:35.693 16:00:01 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:35.693 16:00:01 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:35.693 16:00:01 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:35.693 16:00:01 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:35.693 16:00:01 -- common/autotest_common.sh@1457 -- # uname 00:03:35.693 16:00:01 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:35.693 16:00:01 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:35.693 16:00:01 -- common/autotest_common.sh@1477 -- 
# uname 00:03:35.693 16:00:01 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:35.693 16:00:01 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:35.693 16:00:01 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:35.693 lcov: LCOV version 1.15 00:03:35.693 16:00:01 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:53.803 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:53.803 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:08.684 16:00:33 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:08.684 16:00:33 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:08.684 16:00:33 -- common/autotest_common.sh@10 -- # set +x 00:04:08.684 16:00:33 -- spdk/autotest.sh@78 -- # rm -f 00:04:08.684 16:00:33 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:08.684 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:08.684 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:08.684 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:08.684 16:00:34 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:08.684 16:00:34 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:04:08.684 16:00:34 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:04:08.684 16:00:34 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:04:08.684 
16:00:34 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:04:08.684 16:00:34 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:04:08.684 16:00:34 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:04:08.684 16:00:34 -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:04:08.684 16:00:34 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:08.684 16:00:34 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:04:08.684 16:00:34 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:04:08.684 16:00:34 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:08.684 16:00:34 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:08.684 16:00:34 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:04:08.684 16:00:34 -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:04:08.684 16:00:34 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:08.684 16:00:34 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:04:08.684 16:00:34 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:04:08.684 16:00:34 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:08.684 16:00:34 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:08.684 16:00:34 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:08.684 16:00:34 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n2 00:04:08.684 16:00:34 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:04:08.684 16:00:34 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:08.684 16:00:34 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:08.684 16:00:34 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:08.684 16:00:34 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n3 00:04:08.684 16:00:34 -- 
common/autotest_common.sh@1650 -- # local device=nvme1n3 00:04:08.684 16:00:34 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:08.684 16:00:34 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:08.684 16:00:34 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:08.684 16:00:34 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:08.684 16:00:34 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:08.684 16:00:34 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:08.684 16:00:34 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:08.684 16:00:34 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:08.684 No valid GPT data, bailing 00:04:08.684 16:00:34 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:08.684 16:00:34 -- scripts/common.sh@394 -- # pt= 00:04:08.684 16:00:34 -- scripts/common.sh@395 -- # return 1 00:04:08.684 16:00:34 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:08.684 1+0 records in 00:04:08.684 1+0 records out 00:04:08.684 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00394326 s, 266 MB/s 00:04:08.684 16:00:34 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:08.684 16:00:34 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:08.684 16:00:34 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:04:08.684 16:00:34 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:04:08.684 16:00:34 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:08.684 No valid GPT data, bailing 00:04:08.684 16:00:34 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:08.684 16:00:34 -- scripts/common.sh@394 -- # pt= 00:04:08.684 16:00:34 -- scripts/common.sh@395 -- # return 1 00:04:08.684 16:00:34 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:08.684 1+0 records in 00:04:08.684 1+0 records 
out 00:04:08.684 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00399274 s, 263 MB/s 00:04:08.684 16:00:34 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:08.684 16:00:34 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:08.684 16:00:34 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:04:08.684 16:00:34 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:04:08.684 16:00:34 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:04:08.684 No valid GPT data, bailing 00:04:08.684 16:00:34 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:08.684 16:00:34 -- scripts/common.sh@394 -- # pt= 00:04:08.684 16:00:34 -- scripts/common.sh@395 -- # return 1 00:04:08.684 16:00:34 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:04:08.684 1+0 records in 00:04:08.684 1+0 records out 00:04:08.684 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00601088 s, 174 MB/s 00:04:08.684 16:00:34 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:08.684 16:00:34 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:08.684 16:00:34 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:04:08.684 16:00:34 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:04:08.684 16:00:34 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:04:08.684 No valid GPT data, bailing 00:04:08.684 16:00:34 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:08.684 16:00:34 -- scripts/common.sh@394 -- # pt= 00:04:08.684 16:00:34 -- scripts/common.sh@395 -- # return 1 00:04:08.684 16:00:34 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:04:08.684 1+0 records in 00:04:08.684 1+0 records out 00:04:08.684 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00626789 s, 167 MB/s 00:04:08.684 16:00:34 -- spdk/autotest.sh@105 -- # sync 00:04:08.684 16:00:34 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd 
reap_spdk_processes 00:04:08.684 16:00:34 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:08.684 16:00:34 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:11.216 16:00:37 -- spdk/autotest.sh@111 -- # uname -s 00:04:11.217 16:00:37 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:11.217 16:00:37 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:11.217 16:00:37 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:12.154 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:12.154 Hugepages 00:04:12.154 node hugesize free / total 00:04:12.154 node0 1048576kB 0 / 0 00:04:12.154 node0 2048kB 0 / 0 00:04:12.154 00:04:12.154 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:12.154 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:12.154 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:12.413 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:04:12.413 16:00:38 -- spdk/autotest.sh@117 -- # uname -s 00:04:12.413 16:00:38 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:12.413 16:00:38 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:12.413 16:00:38 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:12.980 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:13.239 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:13.239 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:13.239 16:00:39 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:14.616 16:00:40 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:14.616 16:00:40 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:14.616 16:00:40 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:14.616 16:00:40 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 
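The `is_block_zoned` probe traced during pre-cleanup above keys off a single sysfs attribute: `/sys/block/<dev>/queue/zoned` reads `none` for a conventional namespace. A minimal re-creation of that check follows; the optional root argument is an illustration-only addition (the real helper in `common/autotest_common.sh` reads the live sysfs tree directly):

```shell
# Hedged sketch of the zoned-namespace probe from the trace: a block
# device is zoned when /sys/block/<dev>/queue/zoned holds anything
# other than "none". The second argument is illustrative only, so the
# check can be exercised against a fake sysfs tree.
is_block_zoned() {
    local device=$1 root=${2:-/sys/block}
    # Missing attribute: kernel predates zoned support; treat as not zoned.
    [[ -e "$root/$device/queue/zoned" ]] || return 1
    [[ $(<"$root/$device/queue/zoned") != none ]]
}
```

In the log all four namespaces report `none`, so `zoned_devs` stays empty and the `(( 0 > 0 ))` test skips the zoned-cleanup branch.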
00:04:14.616 16:00:40 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:14.616 16:00:40 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:14.616 16:00:40 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:14.616 16:00:40 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:14.616 16:00:40 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:14.616 16:00:40 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:04:14.616 16:00:40 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:14.616 16:00:40 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:14.874 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:14.874 Waiting for block devices as requested 00:04:15.132 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:15.132 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:15.132 16:00:41 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:15.132 16:00:41 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:15.132 16:00:41 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:04:15.132 16:00:41 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:15.132 16:00:41 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:15.132 16:00:41 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:15.132 16:00:41 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:15.132 16:00:41 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:04:15.132 16:00:41 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:04:15.132 
16:00:41 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:04:15.132 16:00:41 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:04:15.132 16:00:41 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:15.132 16:00:41 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:15.132 16:00:41 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:15.132 16:00:41 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:15.132 16:00:41 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:15.132 16:00:41 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:04:15.132 16:00:41 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:15.132 16:00:41 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:15.132 16:00:41 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:15.132 16:00:41 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:15.132 16:00:41 -- common/autotest_common.sh@1543 -- # continue 00:04:15.132 16:00:41 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:15.132 16:00:41 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:15.132 16:00:41 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:15.132 16:00:41 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:04:15.391 16:00:41 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:15.391 16:00:41 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:15.391 16:00:41 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:15.391 16:00:41 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:15.392 16:00:41 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:15.392 16:00:41 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:15.392 16:00:41 -- 
common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:15.392 16:00:41 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:15.392 16:00:41 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:15.392 16:00:41 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:15.392 16:00:41 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:15.392 16:00:41 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:15.392 16:00:41 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:15.392 16:00:41 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:15.392 16:00:41 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:15.392 16:00:41 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:15.392 16:00:41 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:15.392 16:00:41 -- common/autotest_common.sh@1543 -- # continue 00:04:15.392 16:00:41 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:15.392 16:00:41 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:15.392 16:00:41 -- common/autotest_common.sh@10 -- # set +x 00:04:15.392 16:00:41 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:15.392 16:00:41 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:15.392 16:00:41 -- common/autotest_common.sh@10 -- # set +x 00:04:15.392 16:00:41 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:16.332 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:16.332 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:16.332 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:16.332 16:00:42 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:16.332 16:00:42 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:16.332 16:00:42 -- common/autotest_common.sh@10 -- # set +x 00:04:16.332 16:00:42 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:16.332 16:00:42 -- 
common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:16.332 16:00:42 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:16.332 16:00:42 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:16.332 16:00:42 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:16.332 16:00:42 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:16.332 16:00:42 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:16.592 16:00:42 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:16.592 16:00:42 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:16.592 16:00:42 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:16.592 16:00:42 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:16.592 16:00:42 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:16.592 16:00:42 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:16.592 16:00:42 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:04:16.592 16:00:42 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:16.592 16:00:42 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:16.592 16:00:42 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:16.592 16:00:42 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:16.592 16:00:42 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:16.592 16:00:42 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:16.592 16:00:42 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:16.592 16:00:42 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:16.592 16:00:42 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:16.592 16:00:42 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:04:16.592 16:00:42 -- 
common/autotest_common.sh@1572 -- # return 0 00:04:16.592 16:00:42 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:04:16.592 16:00:42 -- common/autotest_common.sh@1580 -- # return 0 00:04:16.592 16:00:42 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:16.592 16:00:42 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:16.592 16:00:42 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:16.592 16:00:42 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:16.592 16:00:42 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:16.592 16:00:42 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:16.592 16:00:42 -- common/autotest_common.sh@10 -- # set +x 00:04:16.592 16:00:42 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:16.592 16:00:42 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:16.592 16:00:42 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:16.592 16:00:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:16.592 16:00:42 -- common/autotest_common.sh@10 -- # set +x 00:04:16.592 ************************************ 00:04:16.592 START TEST env 00:04:16.592 ************************************ 00:04:16.592 16:00:42 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:16.592 * Looking for test storage... 
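Both controller passes above parse `nvme id-ctrl` output the same way: grep the `oacs` line, cut the value after the colon, and test bit 3 (namespace management), which is where `oacs_ns_manage=8` comes from. A sketch of that bit test on a captured line (the function name and sample strings are illustrative, not taken from the SPDK scripts):

```shell
# Sketch of the OACS decoding in the trace: 'nvme id-ctrl' prints a
# line like "oacs : 0x12a"; bit 3 (mask 0x8) of that value reports
# namespace-management support.
oacs_ns_manage() {
    local line=$1
    local oacs=${line##*:}   # keep everything after the colon
    echo $(( oacs & 0x8 ))   # 8 => supported, 0 => not supported
}
```

Both controllers in the log report `oacs : 0x12a`, so the check yields 8; the follow-up `unvmcap` value of 0 then means there is no unallocated capacity to revert, and the loop hits `continue`.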
00:04:16.852 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:16.852 16:00:42 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:16.852 16:00:42 env -- common/autotest_common.sh@1711 -- # lcov --version 00:04:16.852 16:00:42 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:16.852 16:00:43 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:16.852 16:00:43 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:16.852 16:00:43 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:16.852 16:00:43 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:16.852 16:00:43 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:16.852 16:00:43 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:16.852 16:00:43 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:16.852 16:00:43 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:16.852 16:00:43 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:16.852 16:00:43 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:16.852 16:00:43 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:16.852 16:00:43 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:16.852 16:00:43 env -- scripts/common.sh@344 -- # case "$op" in 00:04:16.852 16:00:43 env -- scripts/common.sh@345 -- # : 1 00:04:16.852 16:00:43 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:16.852 16:00:43 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:16.852 16:00:43 env -- scripts/common.sh@365 -- # decimal 1 00:04:16.852 16:00:43 env -- scripts/common.sh@353 -- # local d=1 00:04:16.852 16:00:43 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:16.852 16:00:43 env -- scripts/common.sh@355 -- # echo 1 00:04:16.852 16:00:43 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:16.852 16:00:43 env -- scripts/common.sh@366 -- # decimal 2 00:04:16.852 16:00:43 env -- scripts/common.sh@353 -- # local d=2 00:04:16.852 16:00:43 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:16.852 16:00:43 env -- scripts/common.sh@355 -- # echo 2 00:04:16.852 16:00:43 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:16.852 16:00:43 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:16.852 16:00:43 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:16.852 16:00:43 env -- scripts/common.sh@368 -- # return 0 00:04:16.852 16:00:43 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:16.852 16:00:43 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:16.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.852 --rc genhtml_branch_coverage=1 00:04:16.852 --rc genhtml_function_coverage=1 00:04:16.852 --rc genhtml_legend=1 00:04:16.852 --rc geninfo_all_blocks=1 00:04:16.852 --rc geninfo_unexecuted_blocks=1 00:04:16.852 00:04:16.852 ' 00:04:16.852 16:00:43 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:16.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.852 --rc genhtml_branch_coverage=1 00:04:16.852 --rc genhtml_function_coverage=1 00:04:16.852 --rc genhtml_legend=1 00:04:16.852 --rc geninfo_all_blocks=1 00:04:16.852 --rc geninfo_unexecuted_blocks=1 00:04:16.852 00:04:16.852 ' 00:04:16.852 16:00:43 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:16.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:04:16.852 --rc genhtml_branch_coverage=1 00:04:16.852 --rc genhtml_function_coverage=1 00:04:16.852 --rc genhtml_legend=1 00:04:16.852 --rc geninfo_all_blocks=1 00:04:16.852 --rc geninfo_unexecuted_blocks=1 00:04:16.852 00:04:16.852 ' 00:04:16.852 16:00:43 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:16.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.852 --rc genhtml_branch_coverage=1 00:04:16.852 --rc genhtml_function_coverage=1 00:04:16.852 --rc genhtml_legend=1 00:04:16.852 --rc geninfo_all_blocks=1 00:04:16.852 --rc geninfo_unexecuted_blocks=1 00:04:16.852 00:04:16.852 ' 00:04:16.852 16:00:43 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:16.852 16:00:43 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:16.852 16:00:43 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:16.852 16:00:43 env -- common/autotest_common.sh@10 -- # set +x 00:04:16.852 ************************************ 00:04:16.852 START TEST env_memory 00:04:16.852 ************************************ 00:04:16.852 16:00:43 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:16.852 00:04:16.852 00:04:16.852 CUnit - A unit testing framework for C - Version 2.1-3 00:04:16.852 http://cunit.sourceforge.net/ 00:04:16.852 00:04:16.852 00:04:16.852 Suite: memory 00:04:16.852 Test: alloc and free memory map ...[2024-12-12 16:00:43.133499] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:16.852 passed 00:04:16.852 Test: mem map translation ...[2024-12-12 16:00:43.178051] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:16.852 [2024-12-12 16:00:43.178116] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:16.852 [2024-12-12 16:00:43.178197] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:16.852 [2024-12-12 16:00:43.178225] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:17.111 passed 00:04:17.111 Test: mem map registration ...[2024-12-12 16:00:43.246767] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:17.111 [2024-12-12 16:00:43.246836] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:17.111 passed 00:04:17.111 Test: mem map adjacent registrations ...passed 00:04:17.111 00:04:17.111 Run Summary: Type Total Ran Passed Failed Inactive 00:04:17.111 suites 1 1 n/a 0 0 00:04:17.111 tests 4 4 4 0 0 00:04:17.111 asserts 152 152 152 0 n/a 00:04:17.111 00:04:17.111 Elapsed time = 0.246 seconds 00:04:17.111 00:04:17.111 real 0m0.300s 00:04:17.111 user 0m0.252s 00:04:17.111 sys 0m0.037s 00:04:17.111 16:00:43 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:17.111 ************************************ 00:04:17.111 END TEST env_memory 00:04:17.111 ************************************ 00:04:17.112 16:00:43 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:17.112 16:00:43 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:17.112 16:00:43 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:17.112 16:00:43 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:17.112 16:00:43 env -- common/autotest_common.sh@10 -- # set +x 00:04:17.112 
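The `lt 1.15 2` call traced at the top of the env suite is a field-by-field version comparison: both strings are split on `.`, `-` and `:` into arrays, and the fields are compared numerically left to right, with missing fields counting as 0. A compact re-creation (`version_lt` is a made-up name; the traced logic lives in `cmp_versions` in `scripts/common.sh`):

```shell
# Hedged sketch of the version comparison the trace steps through:
# split both versions on IFS=.-: and compare numeric fields until one
# side wins; equal versions are not strictly less-than.
version_lt() {
    local IFS=.-:
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < max; i++ )); do
        if (( ${v1[i]:-0} > ${v2[i]:-0} )); then return 1; fi
        if (( ${v1[i]:-0} < ${v2[i]:-0} )); then return 0; fi
    done
    return 1
}
```

Here `1.15 < 2` holds on the first field, which is why the lcov-1.x spelling of the `--rc` coverage options gets exported into `LCOV_OPTS`.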
************************************ 00:04:17.112 START TEST env_vtophys 00:04:17.112 ************************************ 00:04:17.112 16:00:43 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:17.371 EAL: lib.eal log level changed from notice to debug 00:04:17.371 EAL: Detected lcore 0 as core 0 on socket 0 00:04:17.371 EAL: Detected lcore 1 as core 0 on socket 0 00:04:17.371 EAL: Detected lcore 2 as core 0 on socket 0 00:04:17.371 EAL: Detected lcore 3 as core 0 on socket 0 00:04:17.371 EAL: Detected lcore 4 as core 0 on socket 0 00:04:17.371 EAL: Detected lcore 5 as core 0 on socket 0 00:04:17.371 EAL: Detected lcore 6 as core 0 on socket 0 00:04:17.371 EAL: Detected lcore 7 as core 0 on socket 0 00:04:17.371 EAL: Detected lcore 8 as core 0 on socket 0 00:04:17.371 EAL: Detected lcore 9 as core 0 on socket 0 00:04:17.371 EAL: Maximum logical cores by configuration: 128 00:04:17.371 EAL: Detected CPU lcores: 10 00:04:17.371 EAL: Detected NUMA nodes: 1 00:04:17.371 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:17.371 EAL: Detected shared linkage of DPDK 00:04:17.371 EAL: No shared files mode enabled, IPC will be disabled 00:04:17.371 EAL: Selected IOVA mode 'PA' 00:04:17.371 EAL: Probing VFIO support... 00:04:17.371 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:17.371 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:17.371 EAL: Ask a virtual area of 0x2e000 bytes 00:04:17.371 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:17.371 EAL: Setting up physically contiguous memory... 
00:04:17.371 EAL: Setting maximum number of open files to 524288 00:04:17.371 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:17.371 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:17.371 EAL: Ask a virtual area of 0x61000 bytes 00:04:17.371 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:17.371 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:17.371 EAL: Ask a virtual area of 0x400000000 bytes 00:04:17.371 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:17.371 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:17.371 EAL: Ask a virtual area of 0x61000 bytes 00:04:17.371 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:17.371 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:17.371 EAL: Ask a virtual area of 0x400000000 bytes 00:04:17.371 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:17.371 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:17.371 EAL: Ask a virtual area of 0x61000 bytes 00:04:17.371 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:17.371 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:17.371 EAL: Ask a virtual area of 0x400000000 bytes 00:04:17.371 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:17.371 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:17.371 EAL: Ask a virtual area of 0x61000 bytes 00:04:17.371 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:17.371 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:17.371 EAL: Ask a virtual area of 0x400000000 bytes 00:04:17.371 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:17.371 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:17.371 EAL: Hugepages will be freed exactly as allocated. 
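The virtual-area sizes in the EAL setup above follow directly from the memseg geometry it prints: each list holds `n_segs:8192` segments of `hugepage_sz:2097152` (2 MiB), i.e. 16 GiB of reserved VA per list, four lists in total, each preceded by a 0x61000-byte header area. The arithmetic, spelled out:

```shell
# Cross-check of the EAL reservation sizes in the log: 8192 segments
# of 2 MiB per memseg list is exactly the 0x400000000-byte VA window
# requested for each of the four lists.
n_segs=8192
hugepage_sz=$((2 * 1024 * 1024))    # 2097152, as reported
per_list=$((n_segs * hugepage_sz))
printf '0x%x\n' "$per_list"         # 0x400000000
```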
00:04:17.371 EAL: No shared files mode enabled, IPC is disabled 00:04:17.371 EAL: No shared files mode enabled, IPC is disabled 00:04:17.371 EAL: TSC frequency is ~2290000 KHz 00:04:17.371 EAL: Main lcore 0 is ready (tid=7f8ed4019a40;cpuset=[0]) 00:04:17.371 EAL: Trying to obtain current memory policy. 00:04:17.371 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:17.371 EAL: Restoring previous memory policy: 0 00:04:17.371 EAL: request: mp_malloc_sync 00:04:17.371 EAL: No shared files mode enabled, IPC is disabled 00:04:17.371 EAL: Heap on socket 0 was expanded by 2MB 00:04:17.372 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:17.372 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:17.372 EAL: Mem event callback 'spdk:(nil)' registered 00:04:17.372 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:17.372 00:04:17.372 00:04:17.372 CUnit - A unit testing framework for C - Version 2.1-3 00:04:17.372 http://cunit.sourceforge.net/ 00:04:17.372 00:04:17.372 00:04:17.372 Suite: components_suite 00:04:17.942 Test: vtophys_malloc_test ...passed 00:04:17.942 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:17.942 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:17.942 EAL: Restoring previous memory policy: 4 00:04:17.942 EAL: Calling mem event callback 'spdk:(nil)' 00:04:17.942 EAL: request: mp_malloc_sync 00:04:17.942 EAL: No shared files mode enabled, IPC is disabled 00:04:17.942 EAL: Heap on socket 0 was expanded by 4MB 00:04:17.942 EAL: Calling mem event callback 'spdk:(nil)' 00:04:17.942 EAL: request: mp_malloc_sync 00:04:17.942 EAL: No shared files mode enabled, IPC is disabled 00:04:17.942 EAL: Heap on socket 0 was shrunk by 4MB 00:04:17.942 EAL: Trying to obtain current memory policy. 
00:04:17.942 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:17.942 EAL: Restoring previous memory policy: 4 00:04:17.942 EAL: Calling mem event callback 'spdk:(nil)' 00:04:17.942 EAL: request: mp_malloc_sync 00:04:17.942 EAL: No shared files mode enabled, IPC is disabled 00:04:17.942 EAL: Heap on socket 0 was expanded by 6MB 00:04:17.942 EAL: Calling mem event callback 'spdk:(nil)' 00:04:17.942 EAL: request: mp_malloc_sync 00:04:17.942 EAL: No shared files mode enabled, IPC is disabled 00:04:17.942 EAL: Heap on socket 0 was shrunk by 6MB 00:04:17.942 EAL: Trying to obtain current memory policy. 00:04:17.942 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:17.942 EAL: Restoring previous memory policy: 4 00:04:17.942 EAL: Calling mem event callback 'spdk:(nil)' 00:04:17.942 EAL: request: mp_malloc_sync 00:04:17.942 EAL: No shared files mode enabled, IPC is disabled 00:04:17.942 EAL: Heap on socket 0 was expanded by 10MB 00:04:17.942 EAL: Calling mem event callback 'spdk:(nil)' 00:04:17.942 EAL: request: mp_malloc_sync 00:04:17.942 EAL: No shared files mode enabled, IPC is disabled 00:04:17.942 EAL: Heap on socket 0 was shrunk by 10MB 00:04:17.942 EAL: Trying to obtain current memory policy. 00:04:17.942 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:17.942 EAL: Restoring previous memory policy: 4 00:04:17.942 EAL: Calling mem event callback 'spdk:(nil)' 00:04:17.942 EAL: request: mp_malloc_sync 00:04:17.942 EAL: No shared files mode enabled, IPC is disabled 00:04:17.942 EAL: Heap on socket 0 was expanded by 18MB 00:04:18.202 EAL: Calling mem event callback 'spdk:(nil)' 00:04:18.202 EAL: request: mp_malloc_sync 00:04:18.202 EAL: No shared files mode enabled, IPC is disabled 00:04:18.202 EAL: Heap on socket 0 was shrunk by 18MB 00:04:18.202 EAL: Trying to obtain current memory policy. 
00:04:18.202 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:18.202 EAL: Restoring previous memory policy: 4
00:04:18.202 EAL: Calling mem event callback 'spdk:(nil)'
00:04:18.202 EAL: request: mp_malloc_sync
00:04:18.202 EAL: No shared files mode enabled, IPC is disabled
00:04:18.202 EAL: Heap on socket 0 was expanded by 34MB
00:04:18.202 EAL: Calling mem event callback 'spdk:(nil)'
00:04:18.202 EAL: request: mp_malloc_sync
00:04:18.202 EAL: No shared files mode enabled, IPC is disabled
00:04:18.202 EAL: Heap on socket 0 was shrunk by 34MB
00:04:18.202 EAL: Trying to obtain current memory policy.
00:04:18.203 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:18.203 EAL: Restoring previous memory policy: 4
00:04:18.203 EAL: Calling mem event callback 'spdk:(nil)'
00:04:18.203 EAL: request: mp_malloc_sync
00:04:18.203 EAL: No shared files mode enabled, IPC is disabled
00:04:18.203 EAL: Heap on socket 0 was expanded by 66MB
00:04:18.462 EAL: Calling mem event callback 'spdk:(nil)'
00:04:18.462 EAL: request: mp_malloc_sync
00:04:18.462 EAL: No shared files mode enabled, IPC is disabled
00:04:18.462 EAL: Heap on socket 0 was shrunk by 66MB
00:04:18.463 EAL: Trying to obtain current memory policy.
00:04:18.463 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:18.722 EAL: Restoring previous memory policy: 4
00:04:18.722 EAL: Calling mem event callback 'spdk:(nil)'
00:04:18.722 EAL: request: mp_malloc_sync
00:04:18.722 EAL: No shared files mode enabled, IPC is disabled
00:04:18.722 EAL: Heap on socket 0 was expanded by 130MB
00:04:18.722 EAL: Calling mem event callback 'spdk:(nil)'
00:04:18.981 EAL: request: mp_malloc_sync
00:04:18.981 EAL: No shared files mode enabled, IPC is disabled
00:04:18.981 EAL: Heap on socket 0 was shrunk by 130MB
00:04:19.239 EAL: Trying to obtain current memory policy.
00:04:19.239 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:19.239 EAL: Restoring previous memory policy: 4
00:04:19.239 EAL: Calling mem event callback 'spdk:(nil)'
00:04:19.239 EAL: request: mp_malloc_sync
00:04:19.239 EAL: No shared files mode enabled, IPC is disabled
00:04:19.239 EAL: Heap on socket 0 was expanded by 258MB
00:04:19.809 EAL: Calling mem event callback 'spdk:(nil)'
00:04:19.809 EAL: request: mp_malloc_sync
00:04:19.809 EAL: No shared files mode enabled, IPC is disabled
00:04:19.809 EAL: Heap on socket 0 was shrunk by 258MB
00:04:20.379 EAL: Trying to obtain current memory policy.
00:04:20.379 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:20.379 EAL: Restoring previous memory policy: 4
00:04:20.379 EAL: Calling mem event callback 'spdk:(nil)'
00:04:20.379 EAL: request: mp_malloc_sync
00:04:20.379 EAL: No shared files mode enabled, IPC is disabled
00:04:20.379 EAL: Heap on socket 0 was expanded by 514MB
00:04:21.318 EAL: Calling mem event callback 'spdk:(nil)'
00:04:21.578 EAL: request: mp_malloc_sync
00:04:21.578 EAL: No shared files mode enabled, IPC is disabled
00:04:21.578 EAL: Heap on socket 0 was shrunk by 514MB
00:04:22.517 EAL: Trying to obtain current memory policy.
00:04:22.517 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:22.778 EAL: Restoring previous memory policy: 4
00:04:22.778 EAL: Calling mem event callback 'spdk:(nil)'
00:04:22.778 EAL: request: mp_malloc_sync
00:04:22.778 EAL: No shared files mode enabled, IPC is disabled
00:04:22.778 EAL: Heap on socket 0 was expanded by 1026MB
00:04:25.361 EAL: Calling mem event callback 'spdk:(nil)'
00:04:25.361 EAL: request: mp_malloc_sync
00:04:25.361 EAL: No shared files mode enabled, IPC is disabled
00:04:25.361 EAL: Heap on socket 0 was shrunk by 1026MB
00:04:27.274 passed
00:04:27.274
00:04:27.274 Run Summary: Type Total Ran Passed Failed Inactive
00:04:27.274 suites 1 1 n/a 0 0
00:04:27.274 tests 2 2 2 0 0
00:04:27.274 asserts 5663 5663 5663 0 n/a
00:04:27.274
00:04:27.274 Elapsed time = 9.522 seconds
00:04:27.274 EAL: Calling mem event callback 'spdk:(nil)'
00:04:27.274 EAL: request: mp_malloc_sync
00:04:27.274 EAL: No shared files mode enabled, IPC is disabled
00:04:27.274 EAL: Heap on socket 0 was shrunk by 2MB
00:04:27.274 EAL: No shared files mode enabled, IPC is disabled
00:04:27.274 EAL: No shared files mode enabled, IPC is disabled
00:04:27.274 EAL: No shared files mode enabled, IPC is disabled
00:04:27.274
00:04:27.274 real 0m9.856s
00:04:27.274 user 0m8.297s
00:04:27.274 sys 0m1.392s
00:04:27.274 16:00:53 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:27.274 16:00:53 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:04:27.274 ************************************
00:04:27.274 END TEST env_vtophys
00:04:27.274 ************************************
00:04:27.274 16:00:53 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut
00:04:27.274 16:00:53 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:27.274 16:00:53 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:27.274 16:00:53 env -- common/autotest_common.sh@10 -- # set +x
00:04:27.274
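The env_vtophys trace above grows the DPDK heap in doubling steps, and each power-of-two allocation shows up as an expansion of the allocation size plus one extra 2MB hugepage (34MB = 32 + 2, 66 = 64 + 2, …, 1026 = 1024 + 2; the final "shrunk by 2MB" matches the leftover page). A small sketch reproducing that arithmetic; the helper name is made up for illustration and is not part of the SPDK tree:

```shell
#!/usr/bin/env bash
# Illustrative only: regenerate the expansion sizes seen in the env_vtophys
# log as "malloc size in MB + one 2MB hugepage".
expected_expansion_mb() {
  local malloc_mb=$1
  echo $(( malloc_mb + 2 ))
}

for mb in 32 64 128 256 512 1024; do
  printf '%sMB ' "$(expected_expansion_mb "$mb")"
done
printf '\n'   # 34MB 66MB 130MB 258MB 514MB 1026MB
```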
************************************
00:04:27.274 START TEST env_pci
00:04:27.274 ************************************
00:04:27.274 16:00:53 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut
00:04:27.274
00:04:27.274
00:04:27.274 CUnit - A unit testing framework for C - Version 2.1-3
00:04:27.274 http://cunit.sourceforge.net/
00:04:27.274
00:04:27.274
00:04:27.274 Suite: pci
00:04:27.274 Test: pci_hook ...[2024-12-12 16:00:53.393241] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 58677 has claimed it
00:04:27.274 passed
00:04:27.274
00:04:27.274 Run Summary: Type Total Ran Passed Failed Inactive
00:04:27.274 suites 1 1 n/a 0 0
00:04:27.274 tests 1 1 1 0 0
00:04:27.274 asserts 25 25 25 0 n/a
00:04:27.274
00:04:27.274 Elapsed time = 0.006 seconds
00:04:27.274 EAL: Cannot find device (10000:00:01.0)
00:04:27.274 EAL: Failed to attach device on primary process
00:04:27.274
00:04:27.274 real 0m0.108s
00:04:27.274 user 0m0.062s
00:04:27.274 sys 0m0.045s
00:04:27.274 16:00:53 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:27.274 16:00:53 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:04:27.274 ************************************
00:04:27.274 END TEST env_pci
00:04:27.274 ************************************
00:04:27.274 16:00:53 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:04:27.274 16:00:53 env -- env/env.sh@15 -- # uname
00:04:27.274 16:00:53 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:04:27.274 16:00:53 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:04:27.274 16:00:53 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:04:27.274 16:00:53 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:04:27.274 16:00:53 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:27.274 16:00:53 env -- common/autotest_common.sh@10 -- # set +x
00:04:27.274 ************************************
00:04:27.274 START TEST env_dpdk_post_init
00:04:27.274 ************************************
00:04:27.274 16:00:53 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:04:27.274 EAL: Detected CPU lcores: 10
00:04:27.274 EAL: Detected NUMA nodes: 1
00:04:27.274 EAL: Detected shared linkage of DPDK
00:04:27.274 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:04:27.274 EAL: Selected IOVA mode 'PA'
00:04:27.534 TELEMETRY: No legacy callbacks, legacy socket not created
00:04:27.534 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1)
00:04:27.534 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1)
00:04:27.534 Starting DPDK initialization...
00:04:27.534 Starting SPDK post initialization...
00:04:27.534 SPDK NVMe probe
00:04:27.534 Attaching to 0000:00:10.0
00:04:27.534 Attaching to 0000:00:11.0
00:04:27.534 Attached to 0000:00:10.0
00:04:27.534 Attached to 0000:00:11.0
00:04:27.534 Cleaning up...
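The env_dpdk_post_init run above attaches to the two emulated NVMe controllers by their PCI addresses in BDF notation (domain:bus:device.function, e.g. 0000:00:10.0). Picking such an address apart in the harness's own language is a one-liner; `parse_bdf` is a hypothetical helper for illustration, not something in the SPDK scripts:

```shell
#!/usr/bin/env bash
# Split a PCI address like 0000:00:10.0 into its BDF components
# by treating both ':' and '.' as field separators.
parse_bdf() {
  local domain bus dev func
  IFS=':.' read -r domain bus dev func <<< "$1"
  echo "domain=$domain bus=$bus device=$dev function=$func"
}

parse_bdf 0000:00:10.0   # domain=0000 bus=00 device=10 function=0
```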
00:04:27.534
00:04:27.534 real 0m0.290s
00:04:27.534 user 0m0.083s
00:04:27.534 sys 0m0.109s
00:04:27.534 16:00:53 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:27.534 16:00:53 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:04:27.534 ************************************
00:04:27.534 END TEST env_dpdk_post_init
00:04:27.534 ************************************
00:04:27.534 16:00:53 env -- env/env.sh@26 -- # uname
00:04:27.534 16:00:53 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:04:27.534 16:00:53 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks
00:04:27.534 16:00:53 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:27.534 16:00:53 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:27.534 16:00:53 env -- common/autotest_common.sh@10 -- # set +x
00:04:27.795 ************************************
00:04:27.795 START TEST env_mem_callbacks
00:04:27.795 ************************************
00:04:27.795 16:00:53 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks
00:04:27.795 EAL: Detected CPU lcores: 10
00:04:27.795 EAL: Detected NUMA nodes: 1
00:04:27.795 EAL: Detected shared linkage of DPDK
00:04:27.795 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:04:27.795 EAL: Selected IOVA mode 'PA'
00:04:27.795 TELEMETRY: No legacy callbacks, legacy socket not created
00:04:27.795
00:04:27.795
00:04:27.795 CUnit - A unit testing framework for C - Version 2.1-3
00:04:27.795 http://cunit.sourceforge.net/
00:04:27.795
00:04:27.795
00:04:27.795 Suite: memory
00:04:27.796 Test: test ...
00:04:27.796 register 0x200000200000 2097152
00:04:27.796 malloc 3145728
00:04:27.796 register 0x200000400000 4194304
00:04:27.796 buf 0x2000004fffc0 len 3145728 PASSED
00:04:27.796 malloc 64
00:04:27.796 buf 0x2000004ffec0 len 64 PASSED
00:04:27.796 malloc 4194304
00:04:27.796 register 0x200000800000 6291456
00:04:27.796 buf 0x2000009fffc0 len 4194304 PASSED
00:04:27.796 free 0x2000004fffc0 3145728
00:04:27.796 free 0x2000004ffec0 64
00:04:27.796 unregister 0x200000400000 4194304 PASSED
00:04:27.796 free 0x2000009fffc0 4194304
00:04:27.796 unregister 0x200000800000 6291456 PASSED
00:04:27.796 malloc 8388608
00:04:27.796 register 0x200000400000 10485760
00:04:27.796 buf 0x2000005fffc0 len 8388608 PASSED
00:04:27.796 free 0x2000005fffc0 8388608
00:04:28.055 unregister 0x200000400000 10485760 PASSED
00:04:28.055 passed
00:04:28.056
00:04:28.056 Run Summary: Type Total Ran Passed Failed Inactive
00:04:28.056 suites 1 1 n/a 0 0
00:04:28.056 tests 1 1 1 0 0
00:04:28.056 asserts 15 15 15 0 n/a
00:04:28.056
00:04:28.056 Elapsed time = 0.095 seconds
00:04:28.056
00:04:28.056 real 0m0.295s
00:04:28.056 user 0m0.124s
00:04:28.056 sys 0m0.069s
00:04:28.056 16:00:54 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:28.056 16:00:54 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:04:28.056 ************************************
00:04:28.056 END TEST env_mem_callbacks
00:04:28.056 ************************************
00:04:28.056
00:04:28.056 real 0m11.428s
00:04:28.056 user 0m9.043s
00:04:28.056 sys 0m2.034s
00:04:28.056 16:00:54 env -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:28.056 16:00:54 env -- common/autotest_common.sh@10 -- # set +x
00:04:28.056 ************************************
00:04:28.056 END TEST env
00:04:28.056 ************************************
00:04:28.056 16:00:54 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh
00:04:28.056 16:00:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:28.056 16:00:54 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:28.056 16:00:54 -- common/autotest_common.sh@10 -- # set +x
00:04:28.056 ************************************
00:04:28.056 START TEST rpc
00:04:28.056 ************************************
00:04:28.056 16:00:54 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh
00:04:28.316 * Looking for test storage...
00:04:28.316 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc
00:04:28.316 16:00:54 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:04:28.316 16:00:54 rpc -- common/autotest_common.sh@1711 -- # lcov --version
00:04:28.316 16:00:54 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:04:28.316 16:00:54 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:04:28.316 16:00:54 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:28.316 16:00:54 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:28.316 16:00:54 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:28.316 16:00:54 rpc -- scripts/common.sh@336 -- # IFS=.-:
00:04:28.316 16:00:54 rpc -- scripts/common.sh@336 -- # read -ra ver1
00:04:28.316 16:00:54 rpc -- scripts/common.sh@337 -- # IFS=.-:
00:04:28.316 16:00:54 rpc -- scripts/common.sh@337 -- # read -ra ver2
00:04:28.316 16:00:54 rpc -- scripts/common.sh@338 -- # local 'op=<'
00:04:28.316 16:00:54 rpc -- scripts/common.sh@340 -- # ver1_l=2
00:04:28.316 16:00:54 rpc -- scripts/common.sh@341 -- # ver2_l=1
00:04:28.316 16:00:54 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:28.316 16:00:54 rpc -- scripts/common.sh@344 -- # case "$op" in
00:04:28.316 16:00:54 rpc -- scripts/common.sh@345 -- # : 1
00:04:28.316 16:00:54 rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:28.316 16:00:54 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:28.316 16:00:54 rpc -- scripts/common.sh@365 -- # decimal 1
00:04:28.316 16:00:54 rpc -- scripts/common.sh@353 -- # local d=1
00:04:28.316 16:00:54 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:28.316 16:00:54 rpc -- scripts/common.sh@355 -- # echo 1
00:04:28.316 16:00:54 rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:04:28.316 16:00:54 rpc -- scripts/common.sh@366 -- # decimal 2
00:04:28.316 16:00:54 rpc -- scripts/common.sh@353 -- # local d=2
00:04:28.316 16:00:54 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:28.316 16:00:54 rpc -- scripts/common.sh@355 -- # echo 2
00:04:28.316 16:00:54 rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:04:28.316 16:00:54 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:28.316 16:00:54 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:28.316 16:00:54 rpc -- scripts/common.sh@368 -- # return 0
00:04:28.316 16:00:54 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:28.316 16:00:54 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:04:28.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:28.316 --rc genhtml_branch_coverage=1
00:04:28.316 --rc genhtml_function_coverage=1
00:04:28.316 --rc genhtml_legend=1
00:04:28.316 --rc geninfo_all_blocks=1
00:04:28.316 --rc geninfo_unexecuted_blocks=1
00:04:28.316
00:04:28.316 '
00:04:28.316 16:00:54 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:04:28.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:28.316 --rc genhtml_branch_coverage=1
00:04:28.316 --rc genhtml_function_coverage=1
00:04:28.316 --rc genhtml_legend=1
00:04:28.316 --rc geninfo_all_blocks=1
00:04:28.316 --rc geninfo_unexecuted_blocks=1
00:04:28.316
00:04:28.316 '
00:04:28.316 16:00:54 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:04:28.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:28.316 --rc genhtml_branch_coverage=1
00:04:28.316 --rc genhtml_function_coverage=1
00:04:28.316 --rc genhtml_legend=1
00:04:28.316 --rc geninfo_all_blocks=1
00:04:28.316 --rc geninfo_unexecuted_blocks=1
00:04:28.316
00:04:28.316 '
00:04:28.316 16:00:54 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:04:28.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:28.316 --rc genhtml_branch_coverage=1
00:04:28.316 --rc genhtml_function_coverage=1
00:04:28.316 --rc genhtml_legend=1
00:04:28.316 --rc geninfo_all_blocks=1
00:04:28.316 --rc geninfo_unexecuted_blocks=1
00:04:28.316
00:04:28.316 '
00:04:28.316 16:00:54 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev
00:04:28.316 16:00:54 rpc -- rpc/rpc.sh@65 -- # spdk_pid=58804
00:04:28.316 16:00:54 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:04:28.316 16:00:54 rpc -- rpc/rpc.sh@67 -- # waitforlisten 58804
00:04:28.316 16:00:54 rpc -- common/autotest_common.sh@835 -- # '[' -z 58804 ']'
00:04:28.316 16:00:54 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:28.316 16:00:54 rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:28.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:28.316 16:00:54 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:28.316 16:00:54 rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:28.316 16:00:54 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:28.575 [2024-12-12 16:00:54.682286] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:04:28.575 [2024-12-12 16:00:54.682420] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58804 ]
00:04:28.575 [2024-12-12 16:00:54.863256] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:28.845 [2024-12-12 16:00:55.015545] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:04:28.846 [2024-12-12 16:00:55.015624] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 58804' to capture a snapshot of events at runtime.
00:04:28.846 [2024-12-12 16:00:55.015653] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:04:28.846 [2024-12-12 16:00:55.015665] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:04:28.846 [2024-12-12 16:00:55.015674] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid58804 for offline analysis/debug.
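The app_setup_trace notices above point at a tracepoint buffer in shared memory whose name is built from the app name and pid (/dev/shm/spdk_tgt_trace.pid58804, later reported by trace_get_info as tpoint_shm_path). A sketch of that naming convention in shell; the helper is illustrative, not SPDK's own code:

```shell
#!/usr/bin/env bash
# Rebuild the trace shm path printed in the log: /dev/shm/<app>_trace.pid<pid>
trace_shm_path() {
  local app=$1 pid=$2
  echo "/dev/shm/${app}_trace.pid${pid}"
}

trace_shm_path spdk_tgt 58804   # /dev/shm/spdk_tgt_trace.pid58804
```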
00:04:28.846 [2024-12-12 16:00:55.017172] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:04:29.785 16:00:56 rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:29.785 16:00:56 rpc -- common/autotest_common.sh@868 -- # return 0
00:04:29.785 16:00:56 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc
00:04:29.785 16:00:56 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc
00:04:29.785 16:00:56 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd
00:04:29.785 16:00:56 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity
00:04:29.785 16:00:56 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:29.785 16:00:56 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:29.785 16:00:56 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:30.044 ************************************
00:04:30.044 START TEST rpc_integrity
00:04:30.044 ************************************
00:04:30.044 16:00:56 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity
00:04:30.044 16:00:56 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:04:30.044 16:00:56 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:30.044 16:00:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:30.044 16:00:56 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:30.044 16:00:56 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:04:30.044 16:00:56 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length
00:04:30.044 16:00:56 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:04:30.044 16:00:56 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:04:30.044 16:00:56 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:30.044 16:00:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:30.044 16:00:56 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:30.044 16:00:56 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0
00:04:30.044 16:00:56 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:04:30.044 16:00:56 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:30.044 16:00:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:30.044 16:00:56 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:30.044 16:00:56 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:04:30.044 {
00:04:30.044 "name": "Malloc0",
00:04:30.044 "aliases": [
00:04:30.044 "d882fca6-9825-42de-bfea-b3138aff0f43"
00:04:30.044 ],
00:04:30.044 "product_name": "Malloc disk",
00:04:30.044 "block_size": 512,
00:04:30.044 "num_blocks": 16384,
00:04:30.044 "uuid": "d882fca6-9825-42de-bfea-b3138aff0f43",
00:04:30.044 "assigned_rate_limits": {
00:04:30.044 "rw_ios_per_sec": 0,
00:04:30.044 "rw_mbytes_per_sec": 0,
00:04:30.044 "r_mbytes_per_sec": 0,
00:04:30.044 "w_mbytes_per_sec": 0
00:04:30.044 },
00:04:30.044 "claimed": false,
00:04:30.044 "zoned": false,
00:04:30.044 "supported_io_types": {
00:04:30.044 "read": true,
00:04:30.044 "write": true,
00:04:30.044 "unmap": true,
00:04:30.044 "flush": true,
00:04:30.044 "reset": true,
00:04:30.044 "nvme_admin": false,
00:04:30.044 "nvme_io": false,
00:04:30.044 "nvme_io_md": false,
00:04:30.044 "write_zeroes": true,
00:04:30.044 "zcopy": true,
00:04:30.044 "get_zone_info": false,
00:04:30.044 "zone_management": false,
00:04:30.044 "zone_append": false,
00:04:30.044 "compare": false,
00:04:30.044 "compare_and_write": false,
00:04:30.044 "abort": true,
00:04:30.044 "seek_hole": false,
00:04:30.044 "seek_data": false,
00:04:30.044 "copy": true,
00:04:30.044 "nvme_iov_md": false
00:04:30.044 },
00:04:30.044 "memory_domains": [
00:04:30.044 {
00:04:30.044 "dma_device_id": "system",
00:04:30.044 "dma_device_type": 1
00:04:30.044 },
00:04:30.044 {
00:04:30.044 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:30.044 "dma_device_type": 2
00:04:30.044 }
00:04:30.044 ],
00:04:30.044 "driver_specific": {}
00:04:30.044 }
00:04:30.044 ]'
00:04:30.044 16:00:56 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length
00:04:30.044 16:00:56 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:04:30.044 16:00:56 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0
00:04:30.044 16:00:56 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:30.044 16:00:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:30.044 [2024-12-12 16:00:56.320255] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0
00:04:30.044 [2024-12-12 16:00:56.320354] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:04:30.044 [2024-12-12 16:00:56.320408] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880
00:04:30.044 [2024-12-12 16:00:56.320429] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:04:30.045 [2024-12-12 16:00:56.323498] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:04:30.045 [2024-12-12 16:00:56.323554] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:04:30.045 Passthru0
00:04:30.045 16:00:56 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:30.045 16:00:56 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:04:30.045 16:00:56 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:30.045 16:00:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:30.045 16:00:56 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:30.045 16:00:56 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:04:30.045 {
00:04:30.045 "name": "Malloc0",
00:04:30.045 "aliases": [
00:04:30.045 "d882fca6-9825-42de-bfea-b3138aff0f43"
00:04:30.045 ],
00:04:30.045 "product_name": "Malloc disk",
00:04:30.045 "block_size": 512,
00:04:30.045 "num_blocks": 16384,
00:04:30.045 "uuid": "d882fca6-9825-42de-bfea-b3138aff0f43",
00:04:30.045 "assigned_rate_limits": {
00:04:30.045 "rw_ios_per_sec": 0,
00:04:30.045 "rw_mbytes_per_sec": 0,
00:04:30.045 "r_mbytes_per_sec": 0,
00:04:30.045 "w_mbytes_per_sec": 0
00:04:30.045 },
00:04:30.045 "claimed": true,
00:04:30.045 "claim_type": "exclusive_write",
00:04:30.045 "zoned": false,
00:04:30.045 "supported_io_types": {
00:04:30.045 "read": true,
00:04:30.045 "write": true,
00:04:30.045 "unmap": true,
00:04:30.045 "flush": true,
00:04:30.045 "reset": true,
00:04:30.045 "nvme_admin": false,
00:04:30.045 "nvme_io": false,
00:04:30.045 "nvme_io_md": false,
00:04:30.045 "write_zeroes": true,
00:04:30.045 "zcopy": true,
00:04:30.045 "get_zone_info": false,
00:04:30.045 "zone_management": false,
00:04:30.045 "zone_append": false,
00:04:30.045 "compare": false,
00:04:30.045 "compare_and_write": false,
00:04:30.045 "abort": true,
00:04:30.045 "seek_hole": false,
00:04:30.045 "seek_data": false,
00:04:30.045 "copy": true,
00:04:30.045 "nvme_iov_md": false
00:04:30.045 },
00:04:30.045 "memory_domains": [
00:04:30.045 {
00:04:30.045 "dma_device_id": "system",
00:04:30.045 "dma_device_type": 1
00:04:30.045 },
00:04:30.045 {
00:04:30.045 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:30.045 "dma_device_type": 2
00:04:30.045 }
00:04:30.045 ],
00:04:30.045 "driver_specific": {}
00:04:30.045 },
00:04:30.045 {
00:04:30.045 "name": "Passthru0",
00:04:30.045 "aliases": [
00:04:30.045 "2bde7763-35e9-5d3a-9880-571dbde55eff"
00:04:30.045 ],
00:04:30.045 "product_name": "passthru",
00:04:30.045 "block_size": 512,
00:04:30.045 "num_blocks": 16384,
00:04:30.045 "uuid": "2bde7763-35e9-5d3a-9880-571dbde55eff",
00:04:30.045 "assigned_rate_limits": {
00:04:30.045 "rw_ios_per_sec": 0,
00:04:30.045 "rw_mbytes_per_sec": 0,
00:04:30.045 "r_mbytes_per_sec": 0,
00:04:30.045 "w_mbytes_per_sec": 0
00:04:30.045 },
00:04:30.045 "claimed": false,
00:04:30.045 "zoned": false,
00:04:30.045 "supported_io_types": {
00:04:30.045 "read": true,
00:04:30.045 "write": true,
00:04:30.045 "unmap": true,
00:04:30.045 "flush": true,
00:04:30.045 "reset": true,
00:04:30.045 "nvme_admin": false,
00:04:30.045 "nvme_io": false,
00:04:30.045 "nvme_io_md": false,
00:04:30.045 "write_zeroes": true,
00:04:30.045 "zcopy": true,
00:04:30.045 "get_zone_info": false,
00:04:30.045 "zone_management": false,
00:04:30.045 "zone_append": false,
00:04:30.045 "compare": false,
00:04:30.045 "compare_and_write": false,
00:04:30.045 "abort": true,
00:04:30.045 "seek_hole": false,
00:04:30.045 "seek_data": false,
00:04:30.045 "copy": true,
00:04:30.045 "nvme_iov_md": false
00:04:30.045 },
00:04:30.045 "memory_domains": [
00:04:30.045 {
00:04:30.045 "dma_device_id": "system",
00:04:30.045 "dma_device_type": 1
00:04:30.045 },
00:04:30.045 {
00:04:30.045 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:30.045 "dma_device_type": 2
00:04:30.045 }
00:04:30.045 ],
00:04:30.045 "driver_specific": {
00:04:30.045 "passthru": {
00:04:30.045 "name": "Passthru0",
00:04:30.045 "base_bdev_name": "Malloc0"
00:04:30.045 }
00:04:30.045 }
00:04:30.045 }
00:04:30.045 ]'
00:04:30.045 16:00:56 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length
00:04:30.045 16:00:56 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:04:30.045 16:00:56 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:04:30.045 16:00:56 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:30.045 16:00:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:30.304 16:00:56 rpc.rpc_integrity
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:30.304 16:00:56 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0
00:04:30.304 16:00:56 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:30.304 16:00:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:30.304 16:00:56 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:30.304 16:00:56 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:04:30.304 16:00:56 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:30.304 16:00:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:30.304 16:00:56 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:30.304 16:00:56 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]'
00:04:30.304 16:00:56 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length
00:04:30.304 16:00:56 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:04:30.304
00:04:30.304 real 0m0.367s
00:04:30.304 user 0m0.184s
00:04:30.304 sys 0m0.061s
00:04:30.304 16:00:56 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:30.304 16:00:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:30.304 ************************************
00:04:30.304 END TEST rpc_integrity
00:04:30.304 ************************************
00:04:30.304 16:00:56 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins
00:04:30.304 16:00:56 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:30.304 16:00:56 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:30.304 16:00:56 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:30.304 ************************************
00:04:30.304 START TEST rpc_plugins
00:04:30.304 ************************************
00:04:30.304 16:00:56 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins
00:04:30.304 16:00:56 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc
00:04:30.304 16:00:56 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:30.304 16:00:56 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:04:30.304 16:00:56 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:30.304 16:00:56 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1
00:04:30.304 16:00:56 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs
00:04:30.304 16:00:56 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:30.304 16:00:56 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:04:30.304 16:00:56 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:30.304 16:00:56 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[
00:04:30.304 {
00:04:30.304 "name": "Malloc1",
00:04:30.304 "aliases": [
00:04:30.304 "b7ccc1fc-d5f0-4c5f-954b-e5d9fa01da68"
00:04:30.304 ],
00:04:30.304 "product_name": "Malloc disk",
00:04:30.304 "block_size": 4096,
00:04:30.304 "num_blocks": 256,
00:04:30.304 "uuid": "b7ccc1fc-d5f0-4c5f-954b-e5d9fa01da68",
00:04:30.304 "assigned_rate_limits": {
00:04:30.304 "rw_ios_per_sec": 0,
00:04:30.304 "rw_mbytes_per_sec": 0,
00:04:30.304 "r_mbytes_per_sec": 0,
00:04:30.304 "w_mbytes_per_sec": 0
00:04:30.304 },
00:04:30.304 "claimed": false,
00:04:30.304 "zoned": false,
00:04:30.304 "supported_io_types": {
00:04:30.304 "read": true,
00:04:30.304 "write": true,
00:04:30.304 "unmap": true,
00:04:30.304 "flush": true,
00:04:30.304 "reset": true,
00:04:30.304 "nvme_admin": false,
00:04:30.304 "nvme_io": false,
00:04:30.304 "nvme_io_md": false,
00:04:30.304 "write_zeroes": true,
00:04:30.304 "zcopy": true,
00:04:30.304 "get_zone_info": false,
00:04:30.304 "zone_management": false,
00:04:30.304 "zone_append": false,
00:04:30.304 "compare": false,
00:04:30.304 "compare_and_write": false,
00:04:30.304 "abort": true,
00:04:30.304 "seek_hole": false,
00:04:30.304 "seek_data": false,
00:04:30.304 "copy": true,
00:04:30.304 "nvme_iov_md": false
00:04:30.304 },
00:04:30.304 "memory_domains": [
00:04:30.304 {
00:04:30.304 "dma_device_id": "system",
00:04:30.304 "dma_device_type": 1
00:04:30.304 },
00:04:30.304 {
00:04:30.304 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:30.304 "dma_device_type": 2
00:04:30.304 }
00:04:30.304 ],
00:04:30.304 "driver_specific": {}
00:04:30.304 }
00:04:30.305 ]'
00:04:30.305 16:00:56 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length
00:04:30.574 16:00:56 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']'
00:04:30.574 16:00:56 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1
00:04:30.574 16:00:56 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:30.574 16:00:56 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:04:30.574 16:00:56 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:30.574 16:00:56 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs
00:04:30.574 16:00:56 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:30.574 16:00:56 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:04:30.574 16:00:56 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:30.574 16:00:56 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]'
00:04:30.574 16:00:56 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length
00:04:30.574 ************************************
00:04:30.574 END TEST rpc_plugins
00:04:30.574 ************************************
00:04:30.574 16:00:56 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']'
00:04:30.574
00:04:30.574 real 0m0.177s
00:04:30.574 user 0m0.106s
00:04:30.574 sys 0m0.026s
00:04:30.574 16:00:56 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:30.574 16:00:56 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:04:30.574 16:00:56 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test
00:04:30.574 16:00:56 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:30.574 16:00:56 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:30.574 16:00:56 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:30.574 ************************************
00:04:30.574 START TEST rpc_trace_cmd_test
00:04:30.574 ************************************
00:04:30.574 16:00:56 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test
00:04:30.574 16:00:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info
00:04:30.574 16:00:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info
00:04:30.574 16:00:56 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:30.574 16:00:56 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x
00:04:30.574 16:00:56 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:30.574 16:00:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{
00:04:30.574 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid58804",
00:04:30.574 "tpoint_group_mask": "0x8",
00:04:30.574 "iscsi_conn": {
00:04:30.574 "mask": "0x2",
00:04:30.574 "tpoint_mask": "0x0"
00:04:30.574 },
00:04:30.574 "scsi": {
00:04:30.574 "mask": "0x4",
00:04:30.574 "tpoint_mask": "0x0"
00:04:30.574 },
00:04:30.574 "bdev": {
00:04:30.574 "mask": "0x8",
00:04:30.574 "tpoint_mask": "0xffffffffffffffff"
00:04:30.574 },
00:04:30.574 "nvmf_rdma": {
00:04:30.574 "mask": "0x10",
00:04:30.574 "tpoint_mask": "0x0"
00:04:30.574 },
00:04:30.574 "nvmf_tcp": {
00:04:30.574 "mask": "0x20",
00:04:30.574 "tpoint_mask": "0x0"
00:04:30.574 },
00:04:30.574 "ftl": {
00:04:30.574 "mask": "0x40",
00:04:30.574 "tpoint_mask": "0x0"
00:04:30.574 },
00:04:30.574 "blobfs": {
00:04:30.574 "mask": "0x80",
00:04:30.574 "tpoint_mask": "0x0"
00:04:30.574 },
00:04:30.574 "dsa": {
00:04:30.574 "mask": "0x200",
00:04:30.575 "tpoint_mask": "0x0"
00:04:30.575 },
00:04:30.575 "thread": {
00:04:30.575 "mask": "0x400",
00:04:30.575 "tpoint_mask": "0x0"
00:04:30.575 },
00:04:30.575 "nvme_pcie": {
00:04:30.575 "mask": "0x800",
00:04:30.575 "tpoint_mask": "0x0"
00:04:30.575 },
00:04:30.575 "iaa": {
00:04:30.575 "mask": "0x1000",
00:04:30.575 "tpoint_mask": "0x0"
00:04:30.575 },
00:04:30.575 "nvme_tcp": {
00:04:30.575 "mask": "0x2000",
00:04:30.575 "tpoint_mask": "0x0"
00:04:30.575 },
00:04:30.575 "bdev_nvme": {
00:04:30.575 "mask": "0x4000",
00:04:30.575 "tpoint_mask": "0x0"
00:04:30.575 },
00:04:30.575 "sock": {
00:04:30.575 "mask": "0x8000",
00:04:30.575 "tpoint_mask": "0x0"
00:04:30.575 },
00:04:30.575 "blob": {
00:04:30.575 "mask": "0x10000",
00:04:30.575 "tpoint_mask": "0x0"
00:04:30.575 },
00:04:30.575 "bdev_raid": {
00:04:30.575 "mask": "0x20000",
00:04:30.575 "tpoint_mask": "0x0"
00:04:30.575 },
00:04:30.575 "scheduler": {
00:04:30.575 "mask": "0x40000",
00:04:30.575 "tpoint_mask": "0x0"
00:04:30.575 }
00:04:30.575 }'
00:04:30.575 16:00:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length
00:04:30.575 16:00:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']'
00:04:30.575 16:00:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")'
00:04:30.835 16:00:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']'
00:04:30.835 16:00:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")'
00:04:30.835 16:00:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']'
00:04:30.835 16:00:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")'
00:04:30.835 16:00:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']'
00:04:30.835 16:00:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask
00:04:30.835 ************************************
00:04:30.835 END TEST rpc_trace_cmd_test
00:04:30.835 ************************************
00:04:30.835 16:00:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']'
00:04:30.835
00:04:30.835 real 0m0.258s
00:04:30.835 user
0m0.207s 00:04:30.835 sys 0m0.041s 00:04:30.835 16:00:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:30.835 16:00:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:30.835 16:00:57 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:30.835 16:00:57 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:30.835 16:00:57 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:30.835 16:00:57 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:30.835 16:00:57 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:30.835 16:00:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:30.835 ************************************ 00:04:30.835 START TEST rpc_daemon_integrity 00:04:30.835 ************************************ 00:04:30.835 16:00:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:30.835 16:00:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:30.835 16:00:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.835 16:00:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:30.835 16:00:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:30.835 16:00:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:30.835 16:00:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:31.094 16:00:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:31.094 16:00:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:31.094 16:00:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:31.094 16:00:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:31.094 16:00:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:31.094 16:00:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # 
malloc=Malloc2 00:04:31.094 16:00:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:31.094 16:00:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:31.094 16:00:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:31.094 16:00:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:31.094 16:00:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:31.094 { 00:04:31.094 "name": "Malloc2", 00:04:31.094 "aliases": [ 00:04:31.094 "7279af83-4ea3-4319-b3d5-e26aa071ab22" 00:04:31.094 ], 00:04:31.094 "product_name": "Malloc disk", 00:04:31.094 "block_size": 512, 00:04:31.094 "num_blocks": 16384, 00:04:31.094 "uuid": "7279af83-4ea3-4319-b3d5-e26aa071ab22", 00:04:31.094 "assigned_rate_limits": { 00:04:31.094 "rw_ios_per_sec": 0, 00:04:31.094 "rw_mbytes_per_sec": 0, 00:04:31.094 "r_mbytes_per_sec": 0, 00:04:31.094 "w_mbytes_per_sec": 0 00:04:31.094 }, 00:04:31.094 "claimed": false, 00:04:31.094 "zoned": false, 00:04:31.094 "supported_io_types": { 00:04:31.094 "read": true, 00:04:31.094 "write": true, 00:04:31.094 "unmap": true, 00:04:31.094 "flush": true, 00:04:31.094 "reset": true, 00:04:31.094 "nvme_admin": false, 00:04:31.094 "nvme_io": false, 00:04:31.094 "nvme_io_md": false, 00:04:31.094 "write_zeroes": true, 00:04:31.094 "zcopy": true, 00:04:31.094 "get_zone_info": false, 00:04:31.094 "zone_management": false, 00:04:31.094 "zone_append": false, 00:04:31.094 "compare": false, 00:04:31.094 "compare_and_write": false, 00:04:31.094 "abort": true, 00:04:31.094 "seek_hole": false, 00:04:31.094 "seek_data": false, 00:04:31.094 "copy": true, 00:04:31.094 "nvme_iov_md": false 00:04:31.094 }, 00:04:31.094 "memory_domains": [ 00:04:31.094 { 00:04:31.094 "dma_device_id": "system", 00:04:31.094 "dma_device_type": 1 00:04:31.094 }, 00:04:31.094 { 00:04:31.094 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:31.094 "dma_device_type": 2 00:04:31.094 } 
00:04:31.094 ], 00:04:31.094 "driver_specific": {} 00:04:31.094 } 00:04:31.094 ]' 00:04:31.094 16:00:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:31.094 16:00:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:31.094 16:00:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:31.094 16:00:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:31.094 16:00:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:31.094 [2024-12-12 16:00:57.327674] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:31.094 [2024-12-12 16:00:57.327793] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:31.094 [2024-12-12 16:00:57.327824] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:04:31.094 [2024-12-12 16:00:57.327836] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:31.094 [2024-12-12 16:00:57.330861] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:31.094 [2024-12-12 16:00:57.330945] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:31.094 Passthru0 00:04:31.094 16:00:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:31.094 16:00:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:31.094 16:00:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:31.095 16:00:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:31.095 16:00:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:31.095 16:00:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:31.095 { 00:04:31.095 "name": "Malloc2", 00:04:31.095 "aliases": [ 00:04:31.095 "7279af83-4ea3-4319-b3d5-e26aa071ab22" 
00:04:31.095 ], 00:04:31.095 "product_name": "Malloc disk", 00:04:31.095 "block_size": 512, 00:04:31.095 "num_blocks": 16384, 00:04:31.095 "uuid": "7279af83-4ea3-4319-b3d5-e26aa071ab22", 00:04:31.095 "assigned_rate_limits": { 00:04:31.095 "rw_ios_per_sec": 0, 00:04:31.095 "rw_mbytes_per_sec": 0, 00:04:31.095 "r_mbytes_per_sec": 0, 00:04:31.095 "w_mbytes_per_sec": 0 00:04:31.095 }, 00:04:31.095 "claimed": true, 00:04:31.095 "claim_type": "exclusive_write", 00:04:31.095 "zoned": false, 00:04:31.095 "supported_io_types": { 00:04:31.095 "read": true, 00:04:31.095 "write": true, 00:04:31.095 "unmap": true, 00:04:31.095 "flush": true, 00:04:31.095 "reset": true, 00:04:31.095 "nvme_admin": false, 00:04:31.095 "nvme_io": false, 00:04:31.095 "nvme_io_md": false, 00:04:31.095 "write_zeroes": true, 00:04:31.095 "zcopy": true, 00:04:31.095 "get_zone_info": false, 00:04:31.095 "zone_management": false, 00:04:31.095 "zone_append": false, 00:04:31.095 "compare": false, 00:04:31.095 "compare_and_write": false, 00:04:31.095 "abort": true, 00:04:31.095 "seek_hole": false, 00:04:31.095 "seek_data": false, 00:04:31.095 "copy": true, 00:04:31.095 "nvme_iov_md": false 00:04:31.095 }, 00:04:31.095 "memory_domains": [ 00:04:31.095 { 00:04:31.095 "dma_device_id": "system", 00:04:31.095 "dma_device_type": 1 00:04:31.095 }, 00:04:31.095 { 00:04:31.095 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:31.095 "dma_device_type": 2 00:04:31.095 } 00:04:31.095 ], 00:04:31.095 "driver_specific": {} 00:04:31.095 }, 00:04:31.095 { 00:04:31.095 "name": "Passthru0", 00:04:31.095 "aliases": [ 00:04:31.095 "0557598a-c1c4-534b-8345-0d2d243285a6" 00:04:31.095 ], 00:04:31.095 "product_name": "passthru", 00:04:31.095 "block_size": 512, 00:04:31.095 "num_blocks": 16384, 00:04:31.095 "uuid": "0557598a-c1c4-534b-8345-0d2d243285a6", 00:04:31.095 "assigned_rate_limits": { 00:04:31.095 "rw_ios_per_sec": 0, 00:04:31.095 "rw_mbytes_per_sec": 0, 00:04:31.095 "r_mbytes_per_sec": 0, 00:04:31.095 "w_mbytes_per_sec": 0 
00:04:31.095 }, 00:04:31.095 "claimed": false, 00:04:31.095 "zoned": false, 00:04:31.095 "supported_io_types": { 00:04:31.095 "read": true, 00:04:31.095 "write": true, 00:04:31.095 "unmap": true, 00:04:31.095 "flush": true, 00:04:31.095 "reset": true, 00:04:31.095 "nvme_admin": false, 00:04:31.095 "nvme_io": false, 00:04:31.095 "nvme_io_md": false, 00:04:31.095 "write_zeroes": true, 00:04:31.095 "zcopy": true, 00:04:31.095 "get_zone_info": false, 00:04:31.095 "zone_management": false, 00:04:31.095 "zone_append": false, 00:04:31.095 "compare": false, 00:04:31.095 "compare_and_write": false, 00:04:31.095 "abort": true, 00:04:31.095 "seek_hole": false, 00:04:31.095 "seek_data": false, 00:04:31.095 "copy": true, 00:04:31.095 "nvme_iov_md": false 00:04:31.095 }, 00:04:31.095 "memory_domains": [ 00:04:31.095 { 00:04:31.095 "dma_device_id": "system", 00:04:31.095 "dma_device_type": 1 00:04:31.095 }, 00:04:31.095 { 00:04:31.095 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:31.095 "dma_device_type": 2 00:04:31.095 } 00:04:31.095 ], 00:04:31.095 "driver_specific": { 00:04:31.095 "passthru": { 00:04:31.095 "name": "Passthru0", 00:04:31.095 "base_bdev_name": "Malloc2" 00:04:31.095 } 00:04:31.095 } 00:04:31.095 } 00:04:31.095 ]' 00:04:31.095 16:00:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:31.095 16:00:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:31.095 16:00:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:31.095 16:00:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:31.095 16:00:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:31.095 16:00:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:31.095 16:00:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:31.095 16:00:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:04:31.095 16:00:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:31.355 16:00:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:31.355 16:00:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:31.355 16:00:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:31.355 16:00:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:31.355 16:00:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:31.355 16:00:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:31.355 16:00:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:31.355 16:00:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:31.355 00:04:31.355 real 0m0.372s 00:04:31.355 user 0m0.191s 00:04:31.355 sys 0m0.064s 00:04:31.355 16:00:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:31.355 16:00:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:31.355 ************************************ 00:04:31.355 END TEST rpc_daemon_integrity 00:04:31.355 ************************************ 00:04:31.355 16:00:57 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:31.355 16:00:57 rpc -- rpc/rpc.sh@84 -- # killprocess 58804 00:04:31.355 16:00:57 rpc -- common/autotest_common.sh@954 -- # '[' -z 58804 ']' 00:04:31.355 16:00:57 rpc -- common/autotest_common.sh@958 -- # kill -0 58804 00:04:31.355 16:00:57 rpc -- common/autotest_common.sh@959 -- # uname 00:04:31.355 16:00:57 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:31.355 16:00:57 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58804 00:04:31.355 killing process with pid 58804 00:04:31.355 16:00:57 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:31.355 16:00:57 rpc -- common/autotest_common.sh@964 -- 
# '[' reactor_0 = sudo ']' 00:04:31.355 16:00:57 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58804' 00:04:31.355 16:00:57 rpc -- common/autotest_common.sh@973 -- # kill 58804 00:04:31.355 16:00:57 rpc -- common/autotest_common.sh@978 -- # wait 58804 00:04:34.672 00:04:34.672 real 0m6.149s 00:04:34.672 user 0m6.497s 00:04:34.672 sys 0m1.191s 00:04:34.672 16:01:00 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:34.672 16:01:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:34.672 ************************************ 00:04:34.672 END TEST rpc 00:04:34.672 ************************************ 00:04:34.672 16:01:00 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:34.672 16:01:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:34.672 16:01:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:34.672 16:01:00 -- common/autotest_common.sh@10 -- # set +x 00:04:34.672 ************************************ 00:04:34.672 START TEST skip_rpc 00:04:34.672 ************************************ 00:04:34.672 16:01:00 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:34.672 * Looking for test storage... 
00:04:34.672 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:34.672 16:01:00 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:34.672 16:01:00 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:34.672 16:01:00 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:34.672 16:01:00 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:34.672 16:01:00 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:34.672 16:01:00 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:34.672 16:01:00 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:34.672 16:01:00 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:34.672 16:01:00 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:34.672 16:01:00 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:34.672 16:01:00 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:34.672 16:01:00 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:34.672 16:01:00 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:34.672 16:01:00 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:34.672 16:01:00 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:34.672 16:01:00 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:34.672 16:01:00 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:34.672 16:01:00 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:34.672 16:01:00 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:34.672 16:01:00 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:34.672 16:01:00 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:34.672 16:01:00 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:34.672 16:01:00 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:34.672 16:01:00 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:34.672 16:01:00 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:34.672 16:01:00 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:34.672 16:01:00 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:34.672 16:01:00 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:34.672 16:01:00 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:34.672 16:01:00 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:34.672 16:01:00 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:34.672 16:01:00 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:34.672 16:01:00 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:34.672 16:01:00 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:34.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.672 --rc genhtml_branch_coverage=1 00:04:34.672 --rc genhtml_function_coverage=1 00:04:34.672 --rc genhtml_legend=1 00:04:34.672 --rc geninfo_all_blocks=1 00:04:34.672 --rc geninfo_unexecuted_blocks=1 00:04:34.672 00:04:34.672 ' 00:04:34.672 16:01:00 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:34.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.672 --rc genhtml_branch_coverage=1 00:04:34.672 --rc genhtml_function_coverage=1 00:04:34.672 --rc genhtml_legend=1 00:04:34.672 --rc geninfo_all_blocks=1 00:04:34.672 --rc geninfo_unexecuted_blocks=1 00:04:34.672 00:04:34.672 ' 00:04:34.672 16:01:00 skip_rpc -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:04:34.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.672 --rc genhtml_branch_coverage=1 00:04:34.672 --rc genhtml_function_coverage=1 00:04:34.672 --rc genhtml_legend=1 00:04:34.672 --rc geninfo_all_blocks=1 00:04:34.672 --rc geninfo_unexecuted_blocks=1 00:04:34.672 00:04:34.672 ' 00:04:34.672 16:01:00 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:34.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.672 --rc genhtml_branch_coverage=1 00:04:34.672 --rc genhtml_function_coverage=1 00:04:34.672 --rc genhtml_legend=1 00:04:34.672 --rc geninfo_all_blocks=1 00:04:34.672 --rc geninfo_unexecuted_blocks=1 00:04:34.672 00:04:34.672 ' 00:04:34.672 16:01:00 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:34.672 16:01:00 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:34.672 16:01:00 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:34.672 16:01:00 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:34.672 16:01:00 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:34.672 16:01:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:34.673 ************************************ 00:04:34.673 START TEST skip_rpc 00:04:34.673 ************************************ 00:04:34.673 16:01:00 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:34.673 16:01:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=59044 00:04:34.673 16:01:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:34.673 16:01:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:34.673 16:01:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:34.673 [2024-12-12 16:01:00.882432] Starting SPDK v25.01-pre 
git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:04:34.673 [2024-12-12 16:01:00.882584] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59044 ] 00:04:34.932 [2024-12-12 16:01:01.066777] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:34.932 [2024-12-12 16:01:01.215983] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.207 16:01:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:40.207 16:01:05 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:40.207 16:01:05 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:40.207 16:01:05 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:40.207 16:01:05 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:40.207 16:01:05 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:40.207 16:01:05 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:40.207 16:01:05 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:40.207 16:01:05 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:40.207 16:01:05 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.207 16:01:05 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:40.207 16:01:05 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:40.207 16:01:05 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:40.207 16:01:05 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:40.207 16:01:05 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:04:40.207 16:01:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:40.207 16:01:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 59044 00:04:40.207 16:01:05 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 59044 ']' 00:04:40.207 16:01:05 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 59044 00:04:40.207 16:01:05 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:40.207 16:01:05 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:40.207 16:01:05 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59044 00:04:40.207 16:01:05 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:40.207 16:01:05 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:40.207 16:01:05 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59044' 00:04:40.207 killing process with pid 59044 00:04:40.207 16:01:05 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 59044 00:04:40.207 16:01:05 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 59044 00:04:42.746 00:04:42.746 real 0m7.852s 00:04:42.746 user 0m7.192s 00:04:42.746 sys 0m0.576s 00:04:42.746 16:01:08 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:42.746 16:01:08 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:42.746 ************************************ 00:04:42.746 END TEST skip_rpc 00:04:42.746 ************************************ 00:04:42.746 16:01:08 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:42.746 16:01:08 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:42.746 16:01:08 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:42.746 16:01:08 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:42.746 
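The skip_rpc test above relies on the `NOT` helper from autotest_common.sh: it runs `rpc_cmd spdk_get_version` against a target started with `--no-rpc-server` and passes only if the RPC call fails. The following is a minimal standalone sketch of that failure-assertion pattern; the implementation is an illustrative assumption, not the real autotest_common.sh helper.

```shell
# Sketch of a NOT-style assertion helper (hypothetical, simplified):
# run the given command and invert its exit status, so the test
# succeeds exactly when the command fails.
NOT() {
    if "$@"; then
        return 1   # command unexpectedly succeeded -> assertion fails
    fi
    return 0       # command failed, which is what the test expects
}

# Usage: with no RPC server listening, an RPC call is expected to fail,
# so wrapping it in NOT turns that failure into a passing check.
NOT false && echo "assertion passed"
```

This inversion is why the log shows `NOT rpc_cmd spdk_get_version` succeeding (`es=1` captured, then treated as the expected outcome) even though the underlying RPC call returned an error.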
************************************ 00:04:42.746 START TEST skip_rpc_with_json 00:04:42.746 ************************************ 00:04:42.747 16:01:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:42.747 16:01:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:42.747 16:01:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=59148 00:04:42.747 16:01:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:42.747 16:01:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:42.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:42.747 16:01:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 59148 00:04:42.747 16:01:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 59148 ']' 00:04:42.747 16:01:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:42.747 16:01:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:42.747 16:01:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:42.747 16:01:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:42.747 16:01:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:42.747 [2024-12-12 16:01:08.801917] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:04:42.747 [2024-12-12 16:01:08.802125] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59148 ] 00:04:42.747 [2024-12-12 16:01:08.966758] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:43.007 [2024-12-12 16:01:09.104449] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.948 16:01:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:43.948 16:01:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:43.948 16:01:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:43.948 16:01:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.948 16:01:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:43.948 [2024-12-12 16:01:10.130751] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:43.948 request: 00:04:43.948 { 00:04:43.948 "trtype": "tcp", 00:04:43.948 "method": "nvmf_get_transports", 00:04:43.948 "req_id": 1 00:04:43.948 } 00:04:43.948 Got JSON-RPC error response 00:04:43.948 response: 00:04:43.948 { 00:04:43.948 "code": -19, 00:04:43.948 "message": "No such device" 00:04:43.948 } 00:04:43.948 16:01:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:43.948 16:01:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:43.948 16:01:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.948 16:01:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:43.948 [2024-12-12 16:01:10.142845] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
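The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message above comes from the `waitforlisten` helper, which polls until the spdk_tgt RPC socket appears. The sketch below shows the general shape of such a retry loop; the function name, retry budget, and poll interval are assumptions for illustration, not the real autotest_common.sh implementation.

```shell
# Hypothetical sketch of a wait-for-listen retry loop: poll until the
# given path exists as a UNIX socket, giving up after max_retries polls.
wait_for_socket() {
    local sock=$1 max_retries=${2:-100}
    local i=0
    while [ ! -S "$sock" ]; do
        i=$((i + 1))
        if [ "$i" -ge "$max_retries" ]; then
            return 1   # retry budget exhausted; caller treats this as startup failure
        fi
        sleep 0.1
    done
    return 0           # socket is present; RPC commands can now be issued
}

# With no server running, the wait times out and reports failure.
wait_for_socket /tmp/no-such-daemon.sock 3 || echo "timed out waiting"
```

In the real harness the equivalent loop also checks that the target PID is still alive between polls, so a crashed spdk_tgt fails fast instead of consuming the whole retry budget.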
00:04:43.948 16:01:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.948 16:01:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:43.948 16:01:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.948 16:01:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:44.208 16:01:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:44.208 16:01:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:44.208 { 00:04:44.208 "subsystems": [ 00:04:44.208 { 00:04:44.208 "subsystem": "fsdev", 00:04:44.208 "config": [ 00:04:44.208 { 00:04:44.208 "method": "fsdev_set_opts", 00:04:44.208 "params": { 00:04:44.208 "fsdev_io_pool_size": 65535, 00:04:44.208 "fsdev_io_cache_size": 256 00:04:44.208 } 00:04:44.208 } 00:04:44.208 ] 00:04:44.208 }, 00:04:44.208 { 00:04:44.208 "subsystem": "keyring", 00:04:44.208 "config": [] 00:04:44.208 }, 00:04:44.208 { 00:04:44.208 "subsystem": "iobuf", 00:04:44.208 "config": [ 00:04:44.208 { 00:04:44.208 "method": "iobuf_set_options", 00:04:44.208 "params": { 00:04:44.208 "small_pool_count": 8192, 00:04:44.208 "large_pool_count": 1024, 00:04:44.208 "small_bufsize": 8192, 00:04:44.208 "large_bufsize": 135168, 00:04:44.208 "enable_numa": false 00:04:44.208 } 00:04:44.208 } 00:04:44.208 ] 00:04:44.208 }, 00:04:44.208 { 00:04:44.208 "subsystem": "sock", 00:04:44.208 "config": [ 00:04:44.208 { 00:04:44.208 "method": "sock_set_default_impl", 00:04:44.208 "params": { 00:04:44.208 "impl_name": "posix" 00:04:44.208 } 00:04:44.208 }, 00:04:44.208 { 00:04:44.208 "method": "sock_impl_set_options", 00:04:44.208 "params": { 00:04:44.208 "impl_name": "ssl", 00:04:44.208 "recv_buf_size": 4096, 00:04:44.208 "send_buf_size": 4096, 00:04:44.208 "enable_recv_pipe": true, 00:04:44.208 "enable_quickack": false, 00:04:44.208 
"enable_placement_id": 0, 00:04:44.208 "enable_zerocopy_send_server": true, 00:04:44.208 "enable_zerocopy_send_client": false, 00:04:44.208 "zerocopy_threshold": 0, 00:04:44.208 "tls_version": 0, 00:04:44.208 "enable_ktls": false 00:04:44.208 } 00:04:44.208 }, 00:04:44.208 { 00:04:44.208 "method": "sock_impl_set_options", 00:04:44.208 "params": { 00:04:44.208 "impl_name": "posix", 00:04:44.208 "recv_buf_size": 2097152, 00:04:44.208 "send_buf_size": 2097152, 00:04:44.208 "enable_recv_pipe": true, 00:04:44.208 "enable_quickack": false, 00:04:44.208 "enable_placement_id": 0, 00:04:44.208 "enable_zerocopy_send_server": true, 00:04:44.208 "enable_zerocopy_send_client": false, 00:04:44.208 "zerocopy_threshold": 0, 00:04:44.208 "tls_version": 0, 00:04:44.208 "enable_ktls": false 00:04:44.208 } 00:04:44.208 } 00:04:44.208 ] 00:04:44.208 }, 00:04:44.208 { 00:04:44.208 "subsystem": "vmd", 00:04:44.208 "config": [] 00:04:44.208 }, 00:04:44.208 { 00:04:44.208 "subsystem": "accel", 00:04:44.208 "config": [ 00:04:44.208 { 00:04:44.208 "method": "accel_set_options", 00:04:44.208 "params": { 00:04:44.208 "small_cache_size": 128, 00:04:44.208 "large_cache_size": 16, 00:04:44.208 "task_count": 2048, 00:04:44.208 "sequence_count": 2048, 00:04:44.208 "buf_count": 2048 00:04:44.208 } 00:04:44.208 } 00:04:44.208 ] 00:04:44.208 }, 00:04:44.208 { 00:04:44.208 "subsystem": "bdev", 00:04:44.208 "config": [ 00:04:44.208 { 00:04:44.208 "method": "bdev_set_options", 00:04:44.208 "params": { 00:04:44.208 "bdev_io_pool_size": 65535, 00:04:44.208 "bdev_io_cache_size": 256, 00:04:44.208 "bdev_auto_examine": true, 00:04:44.208 "iobuf_small_cache_size": 128, 00:04:44.208 "iobuf_large_cache_size": 16 00:04:44.208 } 00:04:44.208 }, 00:04:44.208 { 00:04:44.208 "method": "bdev_raid_set_options", 00:04:44.208 "params": { 00:04:44.208 "process_window_size_kb": 1024, 00:04:44.208 "process_max_bandwidth_mb_sec": 0 00:04:44.208 } 00:04:44.208 }, 00:04:44.208 { 00:04:44.208 "method": "bdev_iscsi_set_options", 
00:04:44.208 "params": { 00:04:44.208 "timeout_sec": 30 00:04:44.208 } 00:04:44.208 }, 00:04:44.208 { 00:04:44.208 "method": "bdev_nvme_set_options", 00:04:44.208 "params": { 00:04:44.208 "action_on_timeout": "none", 00:04:44.208 "timeout_us": 0, 00:04:44.208 "timeout_admin_us": 0, 00:04:44.208 "keep_alive_timeout_ms": 10000, 00:04:44.208 "arbitration_burst": 0, 00:04:44.208 "low_priority_weight": 0, 00:04:44.208 "medium_priority_weight": 0, 00:04:44.208 "high_priority_weight": 0, 00:04:44.208 "nvme_adminq_poll_period_us": 10000, 00:04:44.208 "nvme_ioq_poll_period_us": 0, 00:04:44.208 "io_queue_requests": 0, 00:04:44.208 "delay_cmd_submit": true, 00:04:44.208 "transport_retry_count": 4, 00:04:44.208 "bdev_retry_count": 3, 00:04:44.208 "transport_ack_timeout": 0, 00:04:44.208 "ctrlr_loss_timeout_sec": 0, 00:04:44.208 "reconnect_delay_sec": 0, 00:04:44.208 "fast_io_fail_timeout_sec": 0, 00:04:44.208 "disable_auto_failback": false, 00:04:44.208 "generate_uuids": false, 00:04:44.208 "transport_tos": 0, 00:04:44.208 "nvme_error_stat": false, 00:04:44.208 "rdma_srq_size": 0, 00:04:44.208 "io_path_stat": false, 00:04:44.208 "allow_accel_sequence": false, 00:04:44.208 "rdma_max_cq_size": 0, 00:04:44.208 "rdma_cm_event_timeout_ms": 0, 00:04:44.208 "dhchap_digests": [ 00:04:44.208 "sha256", 00:04:44.208 "sha384", 00:04:44.208 "sha512" 00:04:44.208 ], 00:04:44.208 "dhchap_dhgroups": [ 00:04:44.208 "null", 00:04:44.208 "ffdhe2048", 00:04:44.208 "ffdhe3072", 00:04:44.208 "ffdhe4096", 00:04:44.208 "ffdhe6144", 00:04:44.208 "ffdhe8192" 00:04:44.208 ], 00:04:44.208 "rdma_umr_per_io": false 00:04:44.208 } 00:04:44.208 }, 00:04:44.208 { 00:04:44.208 "method": "bdev_nvme_set_hotplug", 00:04:44.208 "params": { 00:04:44.208 "period_us": 100000, 00:04:44.209 "enable": false 00:04:44.209 } 00:04:44.209 }, 00:04:44.209 { 00:04:44.209 "method": "bdev_wait_for_examine" 00:04:44.209 } 00:04:44.209 ] 00:04:44.209 }, 00:04:44.209 { 00:04:44.209 "subsystem": "scsi", 00:04:44.209 "config": null 
00:04:44.209 }, 00:04:44.209 { 00:04:44.209 "subsystem": "scheduler", 00:04:44.209 "config": [ 00:04:44.209 { 00:04:44.209 "method": "framework_set_scheduler", 00:04:44.209 "params": { 00:04:44.209 "name": "static" 00:04:44.209 } 00:04:44.209 } 00:04:44.209 ] 00:04:44.209 }, 00:04:44.209 { 00:04:44.209 "subsystem": "vhost_scsi", 00:04:44.209 "config": [] 00:04:44.209 }, 00:04:44.209 { 00:04:44.209 "subsystem": "vhost_blk", 00:04:44.209 "config": [] 00:04:44.209 }, 00:04:44.209 { 00:04:44.209 "subsystem": "ublk", 00:04:44.209 "config": [] 00:04:44.209 }, 00:04:44.209 { 00:04:44.209 "subsystem": "nbd", 00:04:44.209 "config": [] 00:04:44.209 }, 00:04:44.209 { 00:04:44.209 "subsystem": "nvmf", 00:04:44.209 "config": [ 00:04:44.209 { 00:04:44.209 "method": "nvmf_set_config", 00:04:44.209 "params": { 00:04:44.209 "discovery_filter": "match_any", 00:04:44.209 "admin_cmd_passthru": { 00:04:44.209 "identify_ctrlr": false 00:04:44.209 }, 00:04:44.209 "dhchap_digests": [ 00:04:44.209 "sha256", 00:04:44.209 "sha384", 00:04:44.209 "sha512" 00:04:44.209 ], 00:04:44.209 "dhchap_dhgroups": [ 00:04:44.209 "null", 00:04:44.209 "ffdhe2048", 00:04:44.209 "ffdhe3072", 00:04:44.209 "ffdhe4096", 00:04:44.209 "ffdhe6144", 00:04:44.209 "ffdhe8192" 00:04:44.209 ] 00:04:44.209 } 00:04:44.209 }, 00:04:44.209 { 00:04:44.209 "method": "nvmf_set_max_subsystems", 00:04:44.209 "params": { 00:04:44.209 "max_subsystems": 1024 00:04:44.209 } 00:04:44.209 }, 00:04:44.209 { 00:04:44.209 "method": "nvmf_set_crdt", 00:04:44.209 "params": { 00:04:44.209 "crdt1": 0, 00:04:44.209 "crdt2": 0, 00:04:44.209 "crdt3": 0 00:04:44.209 } 00:04:44.209 }, 00:04:44.209 { 00:04:44.209 "method": "nvmf_create_transport", 00:04:44.209 "params": { 00:04:44.209 "trtype": "TCP", 00:04:44.209 "max_queue_depth": 128, 00:04:44.209 "max_io_qpairs_per_ctrlr": 127, 00:04:44.209 "in_capsule_data_size": 4096, 00:04:44.209 "max_io_size": 131072, 00:04:44.209 "io_unit_size": 131072, 00:04:44.209 "max_aq_depth": 128, 00:04:44.209 
"num_shared_buffers": 511, 00:04:44.209 "buf_cache_size": 4294967295, 00:04:44.209 "dif_insert_or_strip": false, 00:04:44.209 "zcopy": false, 00:04:44.209 "c2h_success": true, 00:04:44.209 "sock_priority": 0, 00:04:44.209 "abort_timeout_sec": 1, 00:04:44.209 "ack_timeout": 0, 00:04:44.209 "data_wr_pool_size": 0 00:04:44.209 } 00:04:44.209 } 00:04:44.209 ] 00:04:44.209 }, 00:04:44.209 { 00:04:44.209 "subsystem": "iscsi", 00:04:44.209 "config": [ 00:04:44.209 { 00:04:44.209 "method": "iscsi_set_options", 00:04:44.209 "params": { 00:04:44.209 "node_base": "iqn.2016-06.io.spdk", 00:04:44.209 "max_sessions": 128, 00:04:44.209 "max_connections_per_session": 2, 00:04:44.209 "max_queue_depth": 64, 00:04:44.209 "default_time2wait": 2, 00:04:44.209 "default_time2retain": 20, 00:04:44.209 "first_burst_length": 8192, 00:04:44.209 "immediate_data": true, 00:04:44.209 "allow_duplicated_isid": false, 00:04:44.209 "error_recovery_level": 0, 00:04:44.209 "nop_timeout": 60, 00:04:44.209 "nop_in_interval": 30, 00:04:44.209 "disable_chap": false, 00:04:44.209 "require_chap": false, 00:04:44.209 "mutual_chap": false, 00:04:44.209 "chap_group": 0, 00:04:44.209 "max_large_datain_per_connection": 64, 00:04:44.209 "max_r2t_per_connection": 4, 00:04:44.209 "pdu_pool_size": 36864, 00:04:44.209 "immediate_data_pool_size": 16384, 00:04:44.209 "data_out_pool_size": 2048 00:04:44.209 } 00:04:44.209 } 00:04:44.209 ] 00:04:44.209 } 00:04:44.209 ] 00:04:44.209 } 00:04:44.209 16:01:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:44.209 16:01:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 59148 00:04:44.209 16:01:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 59148 ']' 00:04:44.209 16:01:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 59148 00:04:44.209 16:01:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:44.209 16:01:10 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:44.209 16:01:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59148 00:04:44.209 16:01:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:44.209 16:01:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:44.209 16:01:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59148' 00:04:44.209 killing process with pid 59148 00:04:44.209 16:01:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 59148 00:04:44.209 16:01:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 59148 00:04:47.505 16:01:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=59215 00:04:47.506 16:01:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:47.506 16:01:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:52.785 16:01:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 59215 00:04:52.785 16:01:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 59215 ']' 00:04:52.785 16:01:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 59215 00:04:52.785 16:01:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:52.785 16:01:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:52.785 16:01:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59215 00:04:52.785 killing process with pid 59215 00:04:52.785 16:01:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:52.785 16:01:18 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:52.785 16:01:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59215' 00:04:52.785 16:01:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 59215 00:04:52.785 16:01:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 59215 00:04:54.690 16:01:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:54.690 16:01:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:54.690 00:04:54.690 real 0m12.220s 00:04:54.690 user 0m11.320s 00:04:54.690 sys 0m1.222s 00:04:54.690 16:01:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:54.690 ************************************ 00:04:54.690 END TEST skip_rpc_with_json 00:04:54.690 ************************************ 00:04:54.690 16:01:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:54.690 16:01:20 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:54.690 16:01:20 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:54.690 16:01:20 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:54.690 16:01:20 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:54.690 ************************************ 00:04:54.690 START TEST skip_rpc_with_delay 00:04:54.690 ************************************ 00:04:54.690 16:01:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:54.690 16:01:20 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:54.690 16:01:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- 
# local es=0 00:04:54.690 16:01:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:54.690 16:01:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:54.690 16:01:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:54.690 16:01:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:54.690 16:01:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:54.690 16:01:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:54.690 16:01:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:54.690 16:01:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:54.690 16:01:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:54.690 16:01:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:54.950 [2024-12-12 16:01:21.090169] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:04:54.951 16:01:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:54.951 16:01:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:54.951 16:01:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:54.951 16:01:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:54.951 00:04:54.951 real 0m0.182s 00:04:54.951 user 0m0.090s 00:04:54.951 sys 0m0.090s 00:04:54.951 ************************************ 00:04:54.951 END TEST skip_rpc_with_delay 00:04:54.951 ************************************ 00:04:54.951 16:01:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:54.951 16:01:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:54.951 16:01:21 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:54.951 16:01:21 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:54.951 16:01:21 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:54.951 16:01:21 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:54.951 16:01:21 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:54.951 16:01:21 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:54.951 ************************************ 00:04:54.951 START TEST exit_on_failed_rpc_init 00:04:54.951 ************************************ 00:04:54.951 16:01:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:54.951 16:01:21 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=59349 00:04:54.951 16:01:21 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:54.951 16:01:21 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 59349 00:04:54.951 16:01:21 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 59349 ']' 00:04:54.951 16:01:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:54.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:54.951 16:01:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:54.951 16:01:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:54.951 16:01:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:54.951 16:01:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:55.211 [2024-12-12 16:01:21.341370] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:04:55.211 [2024-12-12 16:01:21.341503] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59349 ] 00:04:55.211 [2024-12-12 16:01:21.521221] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:55.471 [2024-12-12 16:01:21.657023] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.410 16:01:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:56.410 16:01:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:56.410 16:01:22 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:56.410 16:01:22 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:56.410 16:01:22 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:56.410 16:01:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:56.410 16:01:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:56.410 16:01:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:56.410 16:01:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:56.410 16:01:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:56.410 16:01:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:56.410 16:01:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:56.410 16:01:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:56.410 16:01:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:56.410 16:01:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:56.670 [2024-12-12 16:01:22.792395] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:04:56.670 [2024-12-12 16:01:22.792600] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59372 ] 00:04:56.670 [2024-12-12 16:01:22.965608] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:56.930 [2024-12-12 16:01:23.087126] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:04:56.930 [2024-12-12 16:01:23.087307] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:04:56.930 [2024-12-12 16:01:23.087374] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:56.930 [2024-12-12 16:01:23.087398] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:57.190 16:01:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:57.190 16:01:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:57.190 16:01:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:57.190 16:01:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:57.190 16:01:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:57.190 16:01:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:57.190 16:01:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:57.190 16:01:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 59349 00:04:57.190 16:01:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 59349 ']' 00:04:57.190 16:01:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 59349 00:04:57.190 16:01:23 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:57.190 16:01:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:57.190 16:01:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59349 00:04:57.190 killing process with pid 59349 00:04:57.190 16:01:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:57.190 16:01:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:57.190 16:01:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59349' 00:04:57.190 16:01:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 59349 00:04:57.190 16:01:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 59349 00:05:00.481 00:05:00.481 real 0m4.878s 00:05:00.481 user 0m5.095s 00:05:00.481 sys 0m0.761s 00:05:00.481 16:01:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:00.481 16:01:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:00.481 ************************************ 00:05:00.481 END TEST exit_on_failed_rpc_init 00:05:00.481 ************************************ 00:05:00.481 16:01:26 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:00.481 ************************************ 00:05:00.481 END TEST skip_rpc 00:05:00.481 ************************************ 00:05:00.481 00:05:00.481 real 0m25.643s 00:05:00.481 user 0m23.906s 00:05:00.481 sys 0m2.968s 00:05:00.481 16:01:26 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:00.481 16:01:26 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.481 16:01:26 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:00.481 16:01:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:00.481 16:01:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:00.481 16:01:26 -- common/autotest_common.sh@10 -- # set +x 00:05:00.481 ************************************ 00:05:00.481 START TEST rpc_client 00:05:00.481 ************************************ 00:05:00.481 16:01:26 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:00.481 * Looking for test storage... 00:05:00.481 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:00.481 16:01:26 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:00.481 16:01:26 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:05:00.481 16:01:26 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:00.481 16:01:26 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:00.481 16:01:26 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:00.481 16:01:26 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:00.481 16:01:26 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:00.481 16:01:26 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:00.481 16:01:26 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:00.481 16:01:26 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:00.481 16:01:26 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:00.481 16:01:26 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:00.481 16:01:26 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:00.481 16:01:26 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:00.481 16:01:26 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:00.481 16:01:26 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:00.481 16:01:26 rpc_client -- scripts/common.sh@345 
-- # : 1 00:05:00.481 16:01:26 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:00.481 16:01:26 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:00.481 16:01:26 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:00.481 16:01:26 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:00.481 16:01:26 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:00.481 16:01:26 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:00.481 16:01:26 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:00.481 16:01:26 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:00.481 16:01:26 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:00.481 16:01:26 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:00.481 16:01:26 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:00.481 16:01:26 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:00.481 16:01:26 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:00.481 16:01:26 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:00.481 16:01:26 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:00.481 16:01:26 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:00.481 16:01:26 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:00.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.481 --rc genhtml_branch_coverage=1 00:05:00.481 --rc genhtml_function_coverage=1 00:05:00.481 --rc genhtml_legend=1 00:05:00.481 --rc geninfo_all_blocks=1 00:05:00.481 --rc geninfo_unexecuted_blocks=1 00:05:00.481 00:05:00.481 ' 00:05:00.481 16:01:26 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:00.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.481 --rc genhtml_branch_coverage=1 00:05:00.481 --rc genhtml_function_coverage=1 00:05:00.481 --rc 
genhtml_legend=1 00:05:00.481 --rc geninfo_all_blocks=1 00:05:00.481 --rc geninfo_unexecuted_blocks=1 00:05:00.481 00:05:00.481 ' 00:05:00.481 16:01:26 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:00.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.481 --rc genhtml_branch_coverage=1 00:05:00.481 --rc genhtml_function_coverage=1 00:05:00.481 --rc genhtml_legend=1 00:05:00.481 --rc geninfo_all_blocks=1 00:05:00.481 --rc geninfo_unexecuted_blocks=1 00:05:00.481 00:05:00.481 ' 00:05:00.481 16:01:26 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:00.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.481 --rc genhtml_branch_coverage=1 00:05:00.481 --rc genhtml_function_coverage=1 00:05:00.481 --rc genhtml_legend=1 00:05:00.481 --rc geninfo_all_blocks=1 00:05:00.482 --rc geninfo_unexecuted_blocks=1 00:05:00.482 00:05:00.482 ' 00:05:00.482 16:01:26 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:00.482 OK 00:05:00.482 16:01:26 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:00.482 ************************************ 00:05:00.482 END TEST rpc_client 00:05:00.482 ************************************ 00:05:00.482 00:05:00.482 real 0m0.301s 00:05:00.482 user 0m0.163s 00:05:00.482 sys 0m0.156s 00:05:00.482 16:01:26 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:00.482 16:01:26 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:00.482 16:01:26 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:00.482 16:01:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:00.482 16:01:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:00.482 16:01:26 -- common/autotest_common.sh@10 -- # set +x 00:05:00.482 ************************************ 00:05:00.482 START TEST json_config 
00:05:00.482 ************************************ 00:05:00.482 16:01:26 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:00.482 16:01:26 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:00.482 16:01:26 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:05:00.482 16:01:26 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:00.482 16:01:26 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:00.482 16:01:26 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:00.482 16:01:26 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:00.482 16:01:26 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:00.482 16:01:26 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:00.482 16:01:26 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:00.482 16:01:26 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:00.482 16:01:26 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:00.482 16:01:26 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:00.482 16:01:26 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:00.482 16:01:26 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:00.482 16:01:26 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:00.482 16:01:26 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:00.482 16:01:26 json_config -- scripts/common.sh@345 -- # : 1 00:05:00.482 16:01:26 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:00.482 16:01:26 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:00.482 16:01:26 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:00.482 16:01:26 json_config -- scripts/common.sh@353 -- # local d=1 00:05:00.482 16:01:26 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:00.482 16:01:26 json_config -- scripts/common.sh@355 -- # echo 1 00:05:00.482 16:01:26 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:00.482 16:01:26 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:00.482 16:01:26 json_config -- scripts/common.sh@353 -- # local d=2 00:05:00.482 16:01:26 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:00.482 16:01:26 json_config -- scripts/common.sh@355 -- # echo 2 00:05:00.482 16:01:26 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:00.482 16:01:26 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:00.482 16:01:26 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:00.482 16:01:26 json_config -- scripts/common.sh@368 -- # return 0 00:05:00.482 16:01:26 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:00.482 16:01:26 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:00.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.482 --rc genhtml_branch_coverage=1 00:05:00.482 --rc genhtml_function_coverage=1 00:05:00.482 --rc genhtml_legend=1 00:05:00.482 --rc geninfo_all_blocks=1 00:05:00.482 --rc geninfo_unexecuted_blocks=1 00:05:00.482 00:05:00.482 ' 00:05:00.482 16:01:26 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:00.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.482 --rc genhtml_branch_coverage=1 00:05:00.482 --rc genhtml_function_coverage=1 00:05:00.482 --rc genhtml_legend=1 00:05:00.482 --rc geninfo_all_blocks=1 00:05:00.482 --rc geninfo_unexecuted_blocks=1 00:05:00.482 00:05:00.482 ' 00:05:00.482 16:01:26 json_config -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:00.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.482 --rc genhtml_branch_coverage=1 00:05:00.482 --rc genhtml_function_coverage=1 00:05:00.482 --rc genhtml_legend=1 00:05:00.482 --rc geninfo_all_blocks=1 00:05:00.482 --rc geninfo_unexecuted_blocks=1 00:05:00.482 00:05:00.482 ' 00:05:00.482 16:01:26 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:00.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.482 --rc genhtml_branch_coverage=1 00:05:00.482 --rc genhtml_function_coverage=1 00:05:00.482 --rc genhtml_legend=1 00:05:00.482 --rc geninfo_all_blocks=1 00:05:00.482 --rc geninfo_unexecuted_blocks=1 00:05:00.482 00:05:00.482 ' 00:05:00.482 16:01:26 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:00.482 16:01:26 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:00.482 16:01:26 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:00.482 16:01:26 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:00.482 16:01:26 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:00.482 16:01:26 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:00.482 16:01:26 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:00.482 16:01:26 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:00.482 16:01:26 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:00.482 16:01:26 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:00.482 16:01:26 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:00.482 16:01:26 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:00.482 16:01:26 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c218c69e-bbef-4c86-a86c-3bd5562bb564 00:05:00.482 16:01:26 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=c218c69e-bbef-4c86-a86c-3bd5562bb564 00:05:00.482 16:01:26 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:00.482 16:01:26 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:00.482 16:01:26 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:00.482 16:01:26 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:00.482 16:01:26 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:00.482 16:01:26 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:00.482 16:01:26 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:00.482 16:01:26 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:00.482 16:01:26 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:00.482 16:01:26 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:00.482 16:01:26 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:00.482 16:01:26 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:00.482 16:01:26 json_config -- paths/export.sh@5 -- # export PATH 00:05:00.482 16:01:26 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:00.482 16:01:26 json_config -- nvmf/common.sh@51 -- # : 0 00:05:00.482 16:01:26 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:00.482 16:01:26 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:00.482 16:01:26 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:00.482 16:01:26 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:00.482 16:01:26 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:00.482 16:01:26 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:00.482 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:00.482 16:01:26 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:00.482 16:01:26 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:00.482 16:01:26 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:00.482 16:01:26 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 
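The trace above repeatedly exercises scripts/common.sh's `cmp_versions` helper: it splits each version string on `IFS=.-:` into an array and compares components numerically, treating a missing component as 0. A minimal standalone sketch of the same technique (the function name `ver_lt` is illustrative, not SPDK's helper):

```shell
#!/usr/bin/env bash
# Return 0 (true) when version $1 is strictly older than version $2.
ver_lt() {
    local -a v1 v2
    local i len
    # Split on dots, dashes, and colons, exactly as the traced helper does.
    IFS='.-:' read -ra v1 <<< "$1"
    IFS='.-:' read -ra v2 <<< "$2"
    # Walk the longer of the two component lists.
    len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < len; i++ )); do
        # Missing components compare as 0 (so 1.15 == 1.15.0).
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
    done
    return 1   # equal versions are not "less than"
}

ver_lt 1.15 2 && echo "older"   # prints "older"
```

This is why the log shows `lt 1.15 2` returning 0 before the LCOV options are exported: the installed lcov reports a version below 2, so the branch-coverage flags are enabled.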
00:05:00.482 16:01:26 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:00.482 16:01:26 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:00.482 16:01:26 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:00.482 16:01:26 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:00.482 16:01:26 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:00.482 WARNING: No tests are enabled so not running JSON configuration tests 00:05:00.482 16:01:26 json_config -- json_config/json_config.sh@28 -- # exit 0 00:05:00.482 00:05:00.482 real 0m0.232s 00:05:00.482 user 0m0.144s 00:05:00.482 sys 0m0.092s 00:05:00.482 16:01:26 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:00.482 16:01:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:00.482 ************************************ 00:05:00.482 END TEST json_config 00:05:00.482 ************************************ 00:05:00.744 16:01:26 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:00.744 16:01:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:00.744 16:01:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:00.744 16:01:26 -- common/autotest_common.sh@10 -- # set +x 00:05:00.744 ************************************ 00:05:00.744 START TEST json_config_extra_key 00:05:00.744 ************************************ 00:05:00.744 16:01:26 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:00.744 16:01:26 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:00.744 16:01:26 json_config_extra_key -- 
common/autotest_common.sh@1711 -- # lcov --version 00:05:00.744 16:01:26 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:00.744 16:01:27 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:00.744 16:01:27 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:00.744 16:01:27 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:00.744 16:01:27 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:00.744 16:01:27 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:00.744 16:01:27 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:00.744 16:01:27 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:00.744 16:01:27 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:00.744 16:01:27 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:00.744 16:01:27 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:00.744 16:01:27 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:00.744 16:01:27 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:00.744 16:01:27 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:00.744 16:01:27 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:00.744 16:01:27 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:00.744 16:01:27 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:00.744 16:01:27 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:00.744 16:01:27 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:00.744 16:01:27 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:00.744 16:01:27 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:00.744 16:01:27 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:00.744 16:01:27 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:00.744 16:01:27 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:00.744 16:01:27 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:00.744 16:01:27 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:00.744 16:01:27 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:00.744 16:01:27 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:00.744 16:01:27 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:00.744 16:01:27 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:00.744 16:01:27 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:00.744 16:01:27 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:00.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.744 --rc genhtml_branch_coverage=1 00:05:00.744 --rc genhtml_function_coverage=1 00:05:00.744 --rc genhtml_legend=1 00:05:00.744 --rc geninfo_all_blocks=1 00:05:00.744 --rc geninfo_unexecuted_blocks=1 00:05:00.744 00:05:00.744 ' 00:05:00.744 16:01:27 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:00.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.744 --rc genhtml_branch_coverage=1 00:05:00.744 --rc genhtml_function_coverage=1 00:05:00.744 --rc 
genhtml_legend=1 00:05:00.744 --rc geninfo_all_blocks=1 00:05:00.744 --rc geninfo_unexecuted_blocks=1 00:05:00.744 00:05:00.744 ' 00:05:00.744 16:01:27 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:00.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.744 --rc genhtml_branch_coverage=1 00:05:00.744 --rc genhtml_function_coverage=1 00:05:00.744 --rc genhtml_legend=1 00:05:00.744 --rc geninfo_all_blocks=1 00:05:00.744 --rc geninfo_unexecuted_blocks=1 00:05:00.744 00:05:00.744 ' 00:05:00.744 16:01:27 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:00.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.744 --rc genhtml_branch_coverage=1 00:05:00.744 --rc genhtml_function_coverage=1 00:05:00.744 --rc genhtml_legend=1 00:05:00.744 --rc geninfo_all_blocks=1 00:05:00.744 --rc geninfo_unexecuted_blocks=1 00:05:00.744 00:05:00.744 ' 00:05:00.744 16:01:27 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:00.744 16:01:27 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:00.744 16:01:27 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:00.744 16:01:27 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:00.744 16:01:27 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:00.744 16:01:27 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:00.744 16:01:27 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:00.744 16:01:27 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:00.744 16:01:27 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:00.744 16:01:27 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:00.744 16:01:27 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:00.744 16:01:27 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:00.744 16:01:27 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c218c69e-bbef-4c86-a86c-3bd5562bb564 00:05:00.744 16:01:27 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=c218c69e-bbef-4c86-a86c-3bd5562bb564 00:05:00.744 16:01:27 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:00.744 16:01:27 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:00.744 16:01:27 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:00.744 16:01:27 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:00.745 16:01:27 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:00.745 16:01:27 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:01.004 16:01:27 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:01.004 16:01:27 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:01.004 16:01:27 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:01.004 16:01:27 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:01.004 16:01:27 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:01.004 16:01:27 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:01.004 16:01:27 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:01.004 16:01:27 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:01.004 16:01:27 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:01.004 16:01:27 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:01.004 16:01:27 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:01.004 16:01:27 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:01.004 16:01:27 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:01.004 16:01:27 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:05:01.004 16:01:27 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:01.004 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:01.004 16:01:27 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:01.004 16:01:27 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:01.004 16:01:27 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:01.004 16:01:27 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:01.004 16:01:27 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:01.004 16:01:27 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:01.004 16:01:27 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:01.004 16:01:27 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:01.004 16:01:27 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:01.004 16:01:27 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:01.004 16:01:27 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:01.004 16:01:27 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:01.004 16:01:27 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:01.004 16:01:27 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:01.004 INFO: launching applications... 
00:05:01.004 16:01:27 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:01.004 16:01:27 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:01.004 16:01:27 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:01.004 16:01:27 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:01.004 16:01:27 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:01.004 16:01:27 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:01.004 16:01:27 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:01.004 16:01:27 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:01.004 16:01:27 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=59582 00:05:01.004 16:01:27 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:01.004 Waiting for target to run... 00:05:01.004 16:01:27 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 59582 /var/tmp/spdk_tgt.sock 00:05:01.004 16:01:27 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 59582 ']' 00:05:01.004 16:01:27 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:01.004 16:01:27 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:01.004 16:01:27 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:01.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:05:01.004 16:01:27 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:01.004 16:01:27 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:01.004 16:01:27 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:01.004 [2024-12-12 16:01:27.214013] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:05:01.004 [2024-12-12 16:01:27.214213] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59582 ] 00:05:01.264 [2024-12-12 16:01:27.603972] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.524 [2024-12-12 16:01:27.721929] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.463 00:05:02.463 INFO: shutting down applications... 00:05:02.463 16:01:28 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:02.463 16:01:28 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:02.463 16:01:28 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:02.463 16:01:28 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
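The launch sequence above starts spdk_tgt with `-r /var/tmp/spdk_tgt.sock` and then blocks in `waitforlisten` until the daemon is up. The underlying pattern is a bounded poll that checks two things each iteration: the process still exists, and its UNIX domain socket has appeared. A minimal sketch of that shape (the name `wait_for_socket` and the retry budget are illustrative, not SPDK's helper):

```shell
#!/usr/bin/env bash
# Block until process $1 is alive AND listening on UNIX socket $2,
# giving up after $3 probes (default 100, 0.1 s apart).
wait_for_socket() {
    local pid=$1 sock=$2 retries=${3:-100}
    while (( retries-- > 0 )); do
        # kill -0 delivers no signal; it only tests that the PID exists.
        kill -0 "$pid" 2>/dev/null || return 1   # died before listening
        [ -S "$sock" ] && return 0               # socket file is up
        sleep 0.1
    done
    return 1   # retry budget exhausted
}
```

Bounding the loop matters in CI: if the target crashes during startup, the `kill -0` probe fails fast instead of hanging the job until the pipeline timeout.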
00:05:02.463 16:01:28 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:02.463 16:01:28 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:02.463 16:01:28 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:02.463 16:01:28 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 59582 ]] 00:05:02.463 16:01:28 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 59582 00:05:02.463 16:01:28 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:02.463 16:01:28 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:02.463 16:01:28 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59582 00:05:02.463 16:01:28 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:02.722 16:01:28 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:02.722 16:01:28 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:02.723 16:01:28 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59582 00:05:02.723 16:01:28 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:03.292 16:01:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:03.292 16:01:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:03.292 16:01:29 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59582 00:05:03.292 16:01:29 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:03.861 16:01:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:03.861 16:01:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:03.861 16:01:29 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59582 00:05:03.861 16:01:29 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:04.431 16:01:30 json_config_extra_key -- json_config/common.sh@40 -- # 
(( i++ )) 00:05:04.431 16:01:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:04.431 16:01:30 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59582 00:05:04.431 16:01:30 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:04.690 16:01:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:04.690 16:01:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:04.690 16:01:30 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59582 00:05:04.690 16:01:30 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:05.259 16:01:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:05.259 16:01:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:05.259 16:01:31 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59582 00:05:05.259 16:01:31 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:05.259 SPDK target shutdown done 00:05:05.259 Success 00:05:05.259 16:01:31 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:05.259 16:01:31 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:05.260 16:01:31 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:05.260 16:01:31 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:05.260 00:05:05.260 real 0m4.614s 00:05:05.260 user 0m4.321s 00:05:05.260 sys 0m0.623s 00:05:05.260 ************************************ 00:05:05.260 END TEST json_config_extra_key 00:05:05.260 ************************************ 00:05:05.260 16:01:31 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:05.260 16:01:31 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:05.260 16:01:31 -- spdk/autotest.sh@161 -- # run_test alias_rpc 
/home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:05.260 16:01:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:05.260 16:01:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:05.260 16:01:31 -- common/autotest_common.sh@10 -- # set +x 00:05:05.260 ************************************ 00:05:05.260 START TEST alias_rpc 00:05:05.260 ************************************ 00:05:05.260 16:01:31 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:05.519 * Looking for test storage... 00:05:05.519 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:05.519 16:01:31 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:05.520 16:01:31 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:05.520 16:01:31 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:05.520 16:01:31 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:05.520 16:01:31 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:05.520 16:01:31 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:05.520 16:01:31 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:05.520 16:01:31 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:05.520 16:01:31 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:05.520 16:01:31 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:05.520 16:01:31 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:05.520 16:01:31 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:05.520 16:01:31 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:05.520 16:01:31 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:05.520 16:01:31 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:05.520 16:01:31 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:05.520 16:01:31 alias_rpc -- 
scripts/common.sh@345 -- # : 1 00:05:05.520 16:01:31 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:05.520 16:01:31 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:05.520 16:01:31 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:05.520 16:01:31 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:05.520 16:01:31 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:05.520 16:01:31 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:05.520 16:01:31 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:05.520 16:01:31 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:05.520 16:01:31 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:05.520 16:01:31 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:05.520 16:01:31 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:05.520 16:01:31 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:05.520 16:01:31 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:05.520 16:01:31 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:05.520 16:01:31 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:05.520 16:01:31 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:05.520 16:01:31 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:05.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.520 --rc genhtml_branch_coverage=1 00:05:05.520 --rc genhtml_function_coverage=1 00:05:05.520 --rc genhtml_legend=1 00:05:05.520 --rc geninfo_all_blocks=1 00:05:05.520 --rc geninfo_unexecuted_blocks=1 00:05:05.520 00:05:05.520 ' 00:05:05.520 16:01:31 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:05.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.520 --rc genhtml_branch_coverage=1 00:05:05.520 --rc genhtml_function_coverage=1 00:05:05.520 --rc 
genhtml_legend=1 00:05:05.520 --rc geninfo_all_blocks=1 00:05:05.520 --rc geninfo_unexecuted_blocks=1 00:05:05.520 00:05:05.520 ' 00:05:05.520 16:01:31 alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:05.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.520 --rc genhtml_branch_coverage=1 00:05:05.520 --rc genhtml_function_coverage=1 00:05:05.520 --rc genhtml_legend=1 00:05:05.520 --rc geninfo_all_blocks=1 00:05:05.520 --rc geninfo_unexecuted_blocks=1 00:05:05.520 00:05:05.520 ' 00:05:05.520 16:01:31 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:05.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.520 --rc genhtml_branch_coverage=1 00:05:05.520 --rc genhtml_function_coverage=1 00:05:05.520 --rc genhtml_legend=1 00:05:05.520 --rc geninfo_all_blocks=1 00:05:05.520 --rc geninfo_unexecuted_blocks=1 00:05:05.520 00:05:05.520 ' 00:05:05.520 16:01:31 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:05.520 16:01:31 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=59694 00:05:05.520 16:01:31 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:05.520 16:01:31 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 59694 00:05:05.520 16:01:31 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 59694 ']' 00:05:05.520 16:01:31 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:05.520 16:01:31 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:05.520 16:01:31 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:05.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:05.520 16:01:31 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:05.520 16:01:31 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:05.780 [2024-12-12 16:01:31.892559] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:05:05.780 [2024-12-12 16:01:31.892755] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59694 ] 00:05:05.780 [2024-12-12 16:01:32.068781] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.039 [2024-12-12 16:01:32.214509] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.977 16:01:33 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:06.977 16:01:33 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:06.977 16:01:33 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:07.237 16:01:33 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 59694 00:05:07.237 16:01:33 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 59694 ']' 00:05:07.237 16:01:33 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 59694 00:05:07.237 16:01:33 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:07.237 16:01:33 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:07.237 16:01:33 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59694 00:05:07.237 killing process with pid 59694 00:05:07.237 16:01:33 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:07.237 16:01:33 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:07.237 16:01:33 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59694' 00:05:07.237 16:01:33 alias_rpc -- 
common/autotest_common.sh@973 -- # kill 59694 00:05:07.237 16:01:33 alias_rpc -- common/autotest_common.sh@978 -- # wait 59694 00:05:10.533 ************************************ 00:05:10.533 END TEST alias_rpc 00:05:10.533 ************************************ 00:05:10.533 00:05:10.533 real 0m4.672s 00:05:10.533 user 0m4.486s 00:05:10.533 sys 0m0.754s 00:05:10.533 16:01:36 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:10.533 16:01:36 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.533 16:01:36 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:10.533 16:01:36 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:10.533 16:01:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:10.533 16:01:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:10.533 16:01:36 -- common/autotest_common.sh@10 -- # set +x 00:05:10.533 ************************************ 00:05:10.533 START TEST spdkcli_tcp 00:05:10.533 ************************************ 00:05:10.533 16:01:36 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:10.533 * Looking for test storage... 
00:05:10.533 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:10.533 16:01:36 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:10.533 16:01:36 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:05:10.533 16:01:36 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:10.533 16:01:36 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:10.533 16:01:36 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:10.533 16:01:36 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:10.533 16:01:36 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:10.533 16:01:36 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:10.533 16:01:36 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:10.533 16:01:36 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:10.533 16:01:36 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:10.533 16:01:36 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:10.533 16:01:36 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:10.533 16:01:36 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:10.533 16:01:36 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:10.533 16:01:36 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:10.533 16:01:36 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:10.533 16:01:36 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:10.533 16:01:36 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:10.533 16:01:36 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:10.533 16:01:36 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:10.533 16:01:36 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:10.533 16:01:36 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:10.533 16:01:36 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:10.533 16:01:36 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:10.533 16:01:36 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:10.533 16:01:36 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:10.533 16:01:36 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:10.533 16:01:36 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:10.533 16:01:36 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:10.533 16:01:36 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:10.533 16:01:36 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:10.533 16:01:36 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:10.533 16:01:36 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:10.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.533 --rc genhtml_branch_coverage=1 00:05:10.533 --rc genhtml_function_coverage=1 00:05:10.533 --rc genhtml_legend=1 00:05:10.533 --rc geninfo_all_blocks=1 00:05:10.533 --rc geninfo_unexecuted_blocks=1 00:05:10.533 00:05:10.533 ' 00:05:10.533 16:01:36 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:10.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.533 --rc genhtml_branch_coverage=1 00:05:10.533 --rc genhtml_function_coverage=1 00:05:10.533 --rc genhtml_legend=1 00:05:10.533 --rc geninfo_all_blocks=1 00:05:10.533 --rc geninfo_unexecuted_blocks=1 00:05:10.533 00:05:10.533 ' 00:05:10.533 16:01:36 spdkcli_tcp -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:10.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.533 --rc genhtml_branch_coverage=1 00:05:10.533 --rc genhtml_function_coverage=1 00:05:10.533 --rc genhtml_legend=1 00:05:10.533 --rc geninfo_all_blocks=1 00:05:10.533 --rc geninfo_unexecuted_blocks=1 00:05:10.533 00:05:10.533 ' 00:05:10.533 16:01:36 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:10.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.533 --rc genhtml_branch_coverage=1 00:05:10.533 --rc genhtml_function_coverage=1 00:05:10.533 --rc genhtml_legend=1 00:05:10.533 --rc geninfo_all_blocks=1 00:05:10.533 --rc geninfo_unexecuted_blocks=1 00:05:10.533 00:05:10.533 ' 00:05:10.533 16:01:36 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:10.533 16:01:36 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:10.533 16:01:36 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:10.533 16:01:36 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:10.533 16:01:36 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:10.533 16:01:36 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:10.533 16:01:36 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:10.533 16:01:36 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:10.533 16:01:36 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:10.533 16:01:36 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=59806 00:05:10.533 16:01:36 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:10.533 16:01:36 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 59806 00:05:10.533 16:01:36 spdkcli_tcp -- 
common/autotest_common.sh@835 -- # '[' -z 59806 ']' 00:05:10.533 16:01:36 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:10.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:10.533 16:01:36 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:10.533 16:01:36 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:10.533 16:01:36 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:10.533 16:01:36 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:10.533 [2024-12-12 16:01:36.661102] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:05:10.533 [2024-12-12 16:01:36.661350] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59806 ] 00:05:10.533 [2024-12-12 16:01:36.834316] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:10.793 [2024-12-12 16:01:36.975927] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.793 [2024-12-12 16:01:36.976007] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:11.731 16:01:37 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:11.731 16:01:37 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:11.731 16:01:37 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=59829 00:05:11.731 16:01:37 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:11.731 16:01:37 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:11.991 [ 00:05:11.991 "bdev_malloc_delete", 
00:05:11.991 "bdev_malloc_create", 00:05:11.991 "bdev_null_resize", 00:05:11.991 "bdev_null_delete", 00:05:11.991 "bdev_null_create", 00:05:11.991 "bdev_nvme_cuse_unregister", 00:05:11.991 "bdev_nvme_cuse_register", 00:05:11.991 "bdev_opal_new_user", 00:05:11.991 "bdev_opal_set_lock_state", 00:05:11.991 "bdev_opal_delete", 00:05:11.991 "bdev_opal_get_info", 00:05:11.991 "bdev_opal_create", 00:05:11.991 "bdev_nvme_opal_revert", 00:05:11.991 "bdev_nvme_opal_init", 00:05:11.991 "bdev_nvme_send_cmd", 00:05:11.991 "bdev_nvme_set_keys", 00:05:11.991 "bdev_nvme_get_path_iostat", 00:05:11.991 "bdev_nvme_get_mdns_discovery_info", 00:05:11.991 "bdev_nvme_stop_mdns_discovery", 00:05:11.991 "bdev_nvme_start_mdns_discovery", 00:05:11.991 "bdev_nvme_set_multipath_policy", 00:05:11.991 "bdev_nvme_set_preferred_path", 00:05:11.991 "bdev_nvme_get_io_paths", 00:05:11.991 "bdev_nvme_remove_error_injection", 00:05:11.991 "bdev_nvme_add_error_injection", 00:05:11.991 "bdev_nvme_get_discovery_info", 00:05:11.991 "bdev_nvme_stop_discovery", 00:05:11.991 "bdev_nvme_start_discovery", 00:05:11.991 "bdev_nvme_get_controller_health_info", 00:05:11.991 "bdev_nvme_disable_controller", 00:05:11.991 "bdev_nvme_enable_controller", 00:05:11.991 "bdev_nvme_reset_controller", 00:05:11.991 "bdev_nvme_get_transport_statistics", 00:05:11.991 "bdev_nvme_apply_firmware", 00:05:11.991 "bdev_nvme_detach_controller", 00:05:11.991 "bdev_nvme_get_controllers", 00:05:11.991 "bdev_nvme_attach_controller", 00:05:11.991 "bdev_nvme_set_hotplug", 00:05:11.991 "bdev_nvme_set_options", 00:05:11.991 "bdev_passthru_delete", 00:05:11.991 "bdev_passthru_create", 00:05:11.991 "bdev_lvol_set_parent_bdev", 00:05:11.991 "bdev_lvol_set_parent", 00:05:11.991 "bdev_lvol_check_shallow_copy", 00:05:11.991 "bdev_lvol_start_shallow_copy", 00:05:11.991 "bdev_lvol_grow_lvstore", 00:05:11.991 "bdev_lvol_get_lvols", 00:05:11.991 "bdev_lvol_get_lvstores", 00:05:11.991 "bdev_lvol_delete", 00:05:11.991 "bdev_lvol_set_read_only", 
00:05:11.991 "bdev_lvol_resize", 00:05:11.991 "bdev_lvol_decouple_parent", 00:05:11.991 "bdev_lvol_inflate", 00:05:11.991 "bdev_lvol_rename", 00:05:11.991 "bdev_lvol_clone_bdev", 00:05:11.991 "bdev_lvol_clone", 00:05:11.991 "bdev_lvol_snapshot", 00:05:11.991 "bdev_lvol_create", 00:05:11.991 "bdev_lvol_delete_lvstore", 00:05:11.991 "bdev_lvol_rename_lvstore", 00:05:11.991 "bdev_lvol_create_lvstore", 00:05:11.991 "bdev_raid_set_options", 00:05:11.991 "bdev_raid_remove_base_bdev", 00:05:11.991 "bdev_raid_add_base_bdev", 00:05:11.991 "bdev_raid_delete", 00:05:11.991 "bdev_raid_create", 00:05:11.991 "bdev_raid_get_bdevs", 00:05:11.991 "bdev_error_inject_error", 00:05:11.991 "bdev_error_delete", 00:05:11.991 "bdev_error_create", 00:05:11.991 "bdev_split_delete", 00:05:11.991 "bdev_split_create", 00:05:11.991 "bdev_delay_delete", 00:05:11.991 "bdev_delay_create", 00:05:11.992 "bdev_delay_update_latency", 00:05:11.992 "bdev_zone_block_delete", 00:05:11.992 "bdev_zone_block_create", 00:05:11.992 "blobfs_create", 00:05:11.992 "blobfs_detect", 00:05:11.992 "blobfs_set_cache_size", 00:05:11.992 "bdev_aio_delete", 00:05:11.992 "bdev_aio_rescan", 00:05:11.992 "bdev_aio_create", 00:05:11.992 "bdev_ftl_set_property", 00:05:11.992 "bdev_ftl_get_properties", 00:05:11.992 "bdev_ftl_get_stats", 00:05:11.992 "bdev_ftl_unmap", 00:05:11.992 "bdev_ftl_unload", 00:05:11.992 "bdev_ftl_delete", 00:05:11.992 "bdev_ftl_load", 00:05:11.992 "bdev_ftl_create", 00:05:11.992 "bdev_virtio_attach_controller", 00:05:11.992 "bdev_virtio_scsi_get_devices", 00:05:11.992 "bdev_virtio_detach_controller", 00:05:11.992 "bdev_virtio_blk_set_hotplug", 00:05:11.992 "bdev_iscsi_delete", 00:05:11.992 "bdev_iscsi_create", 00:05:11.992 "bdev_iscsi_set_options", 00:05:11.992 "accel_error_inject_error", 00:05:11.992 "ioat_scan_accel_module", 00:05:11.992 "dsa_scan_accel_module", 00:05:11.992 "iaa_scan_accel_module", 00:05:11.992 "keyring_file_remove_key", 00:05:11.992 "keyring_file_add_key", 00:05:11.992 
"keyring_linux_set_options", 00:05:11.992 "fsdev_aio_delete", 00:05:11.992 "fsdev_aio_create", 00:05:11.992 "iscsi_get_histogram", 00:05:11.992 "iscsi_enable_histogram", 00:05:11.992 "iscsi_set_options", 00:05:11.992 "iscsi_get_auth_groups", 00:05:11.992 "iscsi_auth_group_remove_secret", 00:05:11.992 "iscsi_auth_group_add_secret", 00:05:11.992 "iscsi_delete_auth_group", 00:05:11.992 "iscsi_create_auth_group", 00:05:11.992 "iscsi_set_discovery_auth", 00:05:11.992 "iscsi_get_options", 00:05:11.992 "iscsi_target_node_request_logout", 00:05:11.992 "iscsi_target_node_set_redirect", 00:05:11.992 "iscsi_target_node_set_auth", 00:05:11.992 "iscsi_target_node_add_lun", 00:05:11.992 "iscsi_get_stats", 00:05:11.992 "iscsi_get_connections", 00:05:11.992 "iscsi_portal_group_set_auth", 00:05:11.992 "iscsi_start_portal_group", 00:05:11.992 "iscsi_delete_portal_group", 00:05:11.992 "iscsi_create_portal_group", 00:05:11.992 "iscsi_get_portal_groups", 00:05:11.992 "iscsi_delete_target_node", 00:05:11.992 "iscsi_target_node_remove_pg_ig_maps", 00:05:11.992 "iscsi_target_node_add_pg_ig_maps", 00:05:11.992 "iscsi_create_target_node", 00:05:11.992 "iscsi_get_target_nodes", 00:05:11.992 "iscsi_delete_initiator_group", 00:05:11.992 "iscsi_initiator_group_remove_initiators", 00:05:11.992 "iscsi_initiator_group_add_initiators", 00:05:11.992 "iscsi_create_initiator_group", 00:05:11.992 "iscsi_get_initiator_groups", 00:05:11.992 "nvmf_set_crdt", 00:05:11.992 "nvmf_set_config", 00:05:11.992 "nvmf_set_max_subsystems", 00:05:11.992 "nvmf_stop_mdns_prr", 00:05:11.992 "nvmf_publish_mdns_prr", 00:05:11.992 "nvmf_subsystem_get_listeners", 00:05:11.992 "nvmf_subsystem_get_qpairs", 00:05:11.992 "nvmf_subsystem_get_controllers", 00:05:11.992 "nvmf_get_stats", 00:05:11.992 "nvmf_get_transports", 00:05:11.992 "nvmf_create_transport", 00:05:11.992 "nvmf_get_targets", 00:05:11.992 "nvmf_delete_target", 00:05:11.992 "nvmf_create_target", 00:05:11.992 "nvmf_subsystem_allow_any_host", 00:05:11.992 
"nvmf_subsystem_set_keys", 00:05:11.992 "nvmf_subsystem_remove_host", 00:05:11.992 "nvmf_subsystem_add_host", 00:05:11.992 "nvmf_ns_remove_host", 00:05:11.992 "nvmf_ns_add_host", 00:05:11.992 "nvmf_subsystem_remove_ns", 00:05:11.992 "nvmf_subsystem_set_ns_ana_group", 00:05:11.992 "nvmf_subsystem_add_ns", 00:05:11.992 "nvmf_subsystem_listener_set_ana_state", 00:05:11.992 "nvmf_discovery_get_referrals", 00:05:11.992 "nvmf_discovery_remove_referral", 00:05:11.992 "nvmf_discovery_add_referral", 00:05:11.992 "nvmf_subsystem_remove_listener", 00:05:11.992 "nvmf_subsystem_add_listener", 00:05:11.992 "nvmf_delete_subsystem", 00:05:11.992 "nvmf_create_subsystem", 00:05:11.992 "nvmf_get_subsystems", 00:05:11.992 "env_dpdk_get_mem_stats", 00:05:11.992 "nbd_get_disks", 00:05:11.992 "nbd_stop_disk", 00:05:11.992 "nbd_start_disk", 00:05:11.992 "ublk_recover_disk", 00:05:11.992 "ublk_get_disks", 00:05:11.992 "ublk_stop_disk", 00:05:11.992 "ublk_start_disk", 00:05:11.992 "ublk_destroy_target", 00:05:11.992 "ublk_create_target", 00:05:11.992 "virtio_blk_create_transport", 00:05:11.992 "virtio_blk_get_transports", 00:05:11.992 "vhost_controller_set_coalescing", 00:05:11.992 "vhost_get_controllers", 00:05:11.992 "vhost_delete_controller", 00:05:11.992 "vhost_create_blk_controller", 00:05:11.992 "vhost_scsi_controller_remove_target", 00:05:11.992 "vhost_scsi_controller_add_target", 00:05:11.992 "vhost_start_scsi_controller", 00:05:11.992 "vhost_create_scsi_controller", 00:05:11.992 "thread_set_cpumask", 00:05:11.992 "scheduler_set_options", 00:05:11.992 "framework_get_governor", 00:05:11.992 "framework_get_scheduler", 00:05:11.992 "framework_set_scheduler", 00:05:11.992 "framework_get_reactors", 00:05:11.992 "thread_get_io_channels", 00:05:11.992 "thread_get_pollers", 00:05:11.992 "thread_get_stats", 00:05:11.992 "framework_monitor_context_switch", 00:05:11.992 "spdk_kill_instance", 00:05:11.992 "log_enable_timestamps", 00:05:11.992 "log_get_flags", 00:05:11.992 "log_clear_flag", 
00:05:11.992 "log_set_flag", 00:05:11.992 "log_get_level", 00:05:11.992 "log_set_level", 00:05:11.992 "log_get_print_level", 00:05:11.992 "log_set_print_level", 00:05:11.992 "framework_enable_cpumask_locks", 00:05:11.992 "framework_disable_cpumask_locks", 00:05:11.992 "framework_wait_init", 00:05:11.992 "framework_start_init", 00:05:11.992 "scsi_get_devices", 00:05:11.992 "bdev_get_histogram", 00:05:11.992 "bdev_enable_histogram", 00:05:11.992 "bdev_set_qos_limit", 00:05:11.992 "bdev_set_qd_sampling_period", 00:05:11.992 "bdev_get_bdevs", 00:05:11.992 "bdev_reset_iostat", 00:05:11.992 "bdev_get_iostat", 00:05:11.992 "bdev_examine", 00:05:11.992 "bdev_wait_for_examine", 00:05:11.992 "bdev_set_options", 00:05:11.992 "accel_get_stats", 00:05:11.992 "accel_set_options", 00:05:11.992 "accel_set_driver", 00:05:11.992 "accel_crypto_key_destroy", 00:05:11.992 "accel_crypto_keys_get", 00:05:11.992 "accel_crypto_key_create", 00:05:11.992 "accel_assign_opc", 00:05:11.992 "accel_get_module_info", 00:05:11.992 "accel_get_opc_assignments", 00:05:11.992 "vmd_rescan", 00:05:11.992 "vmd_remove_device", 00:05:11.992 "vmd_enable", 00:05:11.992 "sock_get_default_impl", 00:05:11.992 "sock_set_default_impl", 00:05:11.992 "sock_impl_set_options", 00:05:11.992 "sock_impl_get_options", 00:05:11.992 "iobuf_get_stats", 00:05:11.992 "iobuf_set_options", 00:05:11.992 "keyring_get_keys", 00:05:11.992 "framework_get_pci_devices", 00:05:11.992 "framework_get_config", 00:05:11.992 "framework_get_subsystems", 00:05:11.992 "fsdev_set_opts", 00:05:11.992 "fsdev_get_opts", 00:05:11.992 "trace_get_info", 00:05:11.992 "trace_get_tpoint_group_mask", 00:05:11.992 "trace_disable_tpoint_group", 00:05:11.992 "trace_enable_tpoint_group", 00:05:11.992 "trace_clear_tpoint_mask", 00:05:11.992 "trace_set_tpoint_mask", 00:05:11.992 "notify_get_notifications", 00:05:11.992 "notify_get_types", 00:05:11.992 "spdk_get_version", 00:05:11.992 "rpc_get_methods" 00:05:11.992 ] 00:05:11.992 16:01:38 spdkcli_tcp -- 
spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:11.992 16:01:38 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:11.992 16:01:38 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:11.992 16:01:38 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:11.992 16:01:38 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 59806 00:05:11.992 16:01:38 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 59806 ']' 00:05:11.992 16:01:38 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 59806 00:05:11.992 16:01:38 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:11.992 16:01:38 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:11.992 16:01:38 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59806 00:05:11.992 16:01:38 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:11.992 16:01:38 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:11.992 16:01:38 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59806' 00:05:11.992 killing process with pid 59806 00:05:11.992 16:01:38 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 59806 00:05:11.992 16:01:38 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 59806 00:05:15.286 00:05:15.286 real 0m4.639s 00:05:15.286 user 0m8.060s 00:05:15.286 sys 0m0.841s 00:05:15.286 ************************************ 00:05:15.286 END TEST spdkcli_tcp 00:05:15.286 ************************************ 00:05:15.286 16:01:40 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:15.286 16:01:40 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:15.286 16:01:41 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:15.286 16:01:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:15.286 16:01:41 -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:05:15.286 16:01:41 -- common/autotest_common.sh@10 -- # set +x 00:05:15.286 ************************************ 00:05:15.286 START TEST dpdk_mem_utility 00:05:15.286 ************************************ 00:05:15.286 16:01:41 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:15.286 * Looking for test storage... 00:05:15.286 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:15.286 16:01:41 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:15.286 16:01:41 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:05:15.286 16:01:41 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:15.286 16:01:41 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:15.286 16:01:41 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:15.286 16:01:41 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:15.286 16:01:41 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:15.286 16:01:41 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:15.286 16:01:41 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:15.286 16:01:41 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:15.286 16:01:41 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:15.286 16:01:41 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:15.286 16:01:41 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:15.286 16:01:41 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:15.286 16:01:41 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:15.286 16:01:41 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:15.286 16:01:41 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:15.286 
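The `lt 1.15 2` / `cmp_versions 1.15 '<' 2` trace above is a dotted-version comparison used to decide whether the installed lcov predates 2.x (and thus which `LCOV_OPTS` to export). A hedged sketch of the same check: the helper name `ver_lt` and the use of `sort -V` are illustrative, since the upstream scripts/common.sh splits on `.-:` and compares fields in a loop instead.

```shell
# True when $1 sorts strictly before $2 as dotted version strings.
# Equivalent in effect to the cmp_versions loop traced above.
ver_lt() {
  [ "$1" = "$2" ] && return 1
  [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}

ver_lt 1.15 2 && echo "lcov 1.15 predates 2.x: enable branch/function coverage flags"
```

Because lcov 2.x changed its option handling, this one check gates every `--rc lcov_branch_coverage=1 ...` block that repeats through the log.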
16:01:41 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:15.286 16:01:41 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:15.286 16:01:41 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:15.286 16:01:41 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:15.286 16:01:41 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:15.286 16:01:41 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:15.286 16:01:41 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:15.286 16:01:41 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:15.286 16:01:41 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:15.286 16:01:41 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:15.286 16:01:41 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:15.286 16:01:41 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:15.286 16:01:41 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:15.286 16:01:41 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:15.286 16:01:41 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:15.286 16:01:41 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:15.286 16:01:41 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:15.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.286 --rc genhtml_branch_coverage=1 00:05:15.286 --rc genhtml_function_coverage=1 00:05:15.286 --rc genhtml_legend=1 00:05:15.286 --rc geninfo_all_blocks=1 00:05:15.286 --rc geninfo_unexecuted_blocks=1 00:05:15.286 00:05:15.286 ' 00:05:15.286 16:01:41 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:15.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.286 --rc 
genhtml_branch_coverage=1 00:05:15.286 --rc genhtml_function_coverage=1 00:05:15.286 --rc genhtml_legend=1 00:05:15.286 --rc geninfo_all_blocks=1 00:05:15.286 --rc geninfo_unexecuted_blocks=1 00:05:15.286 00:05:15.286 ' 00:05:15.286 16:01:41 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:15.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.286 --rc genhtml_branch_coverage=1 00:05:15.286 --rc genhtml_function_coverage=1 00:05:15.286 --rc genhtml_legend=1 00:05:15.286 --rc geninfo_all_blocks=1 00:05:15.286 --rc geninfo_unexecuted_blocks=1 00:05:15.286 00:05:15.286 ' 00:05:15.286 16:01:41 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:15.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.286 --rc genhtml_branch_coverage=1 00:05:15.286 --rc genhtml_function_coverage=1 00:05:15.286 --rc genhtml_legend=1 00:05:15.286 --rc geninfo_all_blocks=1 00:05:15.286 --rc geninfo_unexecuted_blocks=1 00:05:15.286 00:05:15.286 ' 00:05:15.286 16:01:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:15.286 16:01:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=59934 00:05:15.286 16:01:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:15.286 16:01:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 59934 00:05:15.286 16:01:41 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 59934 ']' 00:05:15.286 16:01:41 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:15.286 16:01:41 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:15.286 16:01:41 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:05:15.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:15.286 16:01:41 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:15.286 16:01:41 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:15.286 [2024-12-12 16:01:41.346277] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:05:15.286 [2024-12-12 16:01:41.346509] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59934 ] 00:05:15.286 [2024-12-12 16:01:41.527260] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.546 [2024-12-12 16:01:41.659818] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.487 16:01:42 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:16.487 16:01:42 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:16.487 16:01:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:16.487 16:01:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:16.487 16:01:42 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:16.487 16:01:42 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:16.487 { 00:05:16.487 "filename": "/tmp/spdk_mem_dump.txt" 00:05:16.487 } 00:05:16.487 16:01:42 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:16.487 16:01:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:16.487 DPDK memory size 824.000000 MiB in 1 heap(s) 00:05:16.487 1 heaps totaling size 824.000000 MiB 00:05:16.487 size: 
824.000000 MiB heap id: 0 00:05:16.487 end heaps---------- 00:05:16.487 9 mempools totaling size 603.782043 MiB 00:05:16.487 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:16.487 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:16.487 size: 100.555481 MiB name: bdev_io_59934 00:05:16.487 size: 50.003479 MiB name: msgpool_59934 00:05:16.487 size: 36.509338 MiB name: fsdev_io_59934 00:05:16.487 size: 21.763794 MiB name: PDU_Pool 00:05:16.487 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:16.487 size: 4.133484 MiB name: evtpool_59934 00:05:16.487 size: 0.026123 MiB name: Session_Pool 00:05:16.487 end mempools------- 00:05:16.487 6 memzones totaling size 4.142822 MiB 00:05:16.487 size: 1.000366 MiB name: RG_ring_0_59934 00:05:16.487 size: 1.000366 MiB name: RG_ring_1_59934 00:05:16.487 size: 1.000366 MiB name: RG_ring_4_59934 00:05:16.487 size: 1.000366 MiB name: RG_ring_5_59934 00:05:16.487 size: 0.125366 MiB name: RG_ring_2_59934 00:05:16.487 size: 0.015991 MiB name: RG_ring_3_59934 00:05:16.487 end memzones------- 00:05:16.487 16:01:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:16.487 heap id: 0 total size: 824.000000 MiB number of busy elements: 315 number of free elements: 18 00:05:16.487 list of free elements. 
size: 16.781372 MiB 00:05:16.487 element at address: 0x200006400000 with size: 1.995972 MiB 00:05:16.487 element at address: 0x20000a600000 with size: 1.995972 MiB 00:05:16.487 element at address: 0x200003e00000 with size: 1.991028 MiB 00:05:16.487 element at address: 0x200019500040 with size: 0.999939 MiB 00:05:16.487 element at address: 0x200019900040 with size: 0.999939 MiB 00:05:16.487 element at address: 0x200019a00000 with size: 0.999084 MiB 00:05:16.487 element at address: 0x200032600000 with size: 0.994324 MiB 00:05:16.487 element at address: 0x200000400000 with size: 0.992004 MiB 00:05:16.487 element at address: 0x200019200000 with size: 0.959656 MiB 00:05:16.487 element at address: 0x200019d00040 with size: 0.936401 MiB 00:05:16.487 element at address: 0x200000200000 with size: 0.716980 MiB 00:05:16.487 element at address: 0x20001b400000 with size: 0.562927 MiB 00:05:16.487 element at address: 0x200000c00000 with size: 0.489197 MiB 00:05:16.487 element at address: 0x200019600000 with size: 0.487976 MiB 00:05:16.487 element at address: 0x200019e00000 with size: 0.485413 MiB 00:05:16.487 element at address: 0x200012c00000 with size: 0.433228 MiB 00:05:16.487 element at address: 0x200028800000 with size: 0.390442 MiB 00:05:16.487 element at address: 0x200000800000 with size: 0.350891 MiB 00:05:16.487 list of standard malloc elements. 
size: 199.287720 MiB 00:05:16.487 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:05:16.487 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:05:16.487 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:05:16.487 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:05:16.487 element at address: 0x200019bfff80 with size: 1.000183 MiB 00:05:16.487 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:05:16.487 element at address: 0x200019deff40 with size: 0.062683 MiB 00:05:16.487 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:05:16.487 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:05:16.487 element at address: 0x200019defdc0 with size: 0.000366 MiB 00:05:16.487 element at address: 0x200012bff040 with size: 0.000305 MiB 00:05:16.487 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:05:16.487 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:05:16.487 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:05:16.487 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:05:16.487 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:05:16.487 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:05:16.487 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:05:16.487 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:05:16.487 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:05:16.487 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:05:16.487 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:05:16.487 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:05:16.487 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:05:16.487 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:05:16.487 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:05:16.487 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:05:16.487 element at 
address: 0x2000004fed40 with size: 0.000244 MiB 00:05:16.487 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:05:16.487 element at address: 0x2000004fef40 with size: 0.000244 MiB 00:05:16.487 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:05:16.487 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:05:16.487 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:05:16.487 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:05:16.487 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:05:16.487 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:05:16.487 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:05:16.487 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:05:16.487 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:05:16.487 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:05:16.487 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:05:16.487 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:05:16.487 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:05:16.487 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:05:16.487 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:05:16.487 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:05:16.487 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:05:16.487 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:05:16.487 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:05:16.487 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:05:16.487 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:05:16.487 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:05:16.487 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:05:16.487 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:05:16.487 element at address: 0x20000087ecc0 with size: 0.000244 MiB 
00:05:16.487 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:05:16.487 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:05:16.487 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:05:16.487 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:05:16.487 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:05:16.487 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:05:16.487 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:05:16.487 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:05:16.487 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:05:16.487 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:05:16.487 element at address: 0x200000c7d3c0 with size: 0.000244 MiB 00:05:16.487 element at address: 0x200000c7d4c0 with size: 0.000244 MiB 00:05:16.487 element at address: 0x200000c7d5c0 with size: 0.000244 MiB 00:05:16.487 element at address: 0x200000c7d6c0 with size: 0.000244 MiB 00:05:16.488 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:05:16.488 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:05:16.488 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:05:16.488 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:05:16.488 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:05:16.488 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:05:16.488 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:05:16.488 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:05:16.488 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:05:16.488 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:05:16.488 element at address: 0x200000c7e1c0 with size: 0.000244 MiB 00:05:16.488 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:05:16.488 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:05:16.488 element at address: 0x200000c7e4c0 with 
size: 0.000244 MiB 00:05:16.488 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:05:16.488 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:05:16.488 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:05:16.488 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:05:16.488 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:05:16.488 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:05:16.488 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:05:16.488 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:05:16.488 element at address: 0x200000cff000 with size: 0.000244 MiB 00:05:16.488 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:05:16.488 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:05:16.488 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:05:16.488 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:05:16.488 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:05:16.488 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:05:16.488 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:05:16.488 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:05:16.488 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:05:16.488 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:05:16.488 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:05:16.488 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:05:16.488 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:05:16.488 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:05:16.488 element at address: 0x200012bff180 with size: 0.000244 MiB 00:05:16.488 element at address: 0x200012bff280 with size: 0.000244 MiB 00:05:16.488 element at address: 0x200012bff380 with size: 0.000244 MiB 00:05:16.488 element at address: 0x200012bff480 with size: 0.000244 MiB 00:05:16.488 element at address: 
0x200012bff580 with size: 0.000244 MiB 00:05:16.488 element at address: 0x200012bff680 with size: 0.000244 MiB 00:05:16.488 element at address: 0x200012bff780 with size: 0.000244 MiB 00:05:16.488 element at address: 0x200012bff880 with size: 0.000244 MiB 00:05:16.488 element at address: 0x200012bff980 with size: 0.000244 MiB 00:05:16.488 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:05:16.488 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:05:16.488 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:05:16.488 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:05:16.488 element at address: 0x200012c6ee80 with size: 0.000244 MiB 00:05:16.488 element at address: 0x200012c6ef80 with size: 0.000244 MiB 00:05:16.488 element at address: 0x200012c6f080 with size: 0.000244 MiB 00:05:16.488 element at address: 0x200012c6f180 with size: 0.000244 MiB 00:05:16.488 element at address: 0x200012c6f280 with size: 0.000244 MiB 00:05:16.488 element at address: 0x200012c6f380 with size: 0.000244 MiB 00:05:16.488 element at address: 0x200012c6f480 with size: 0.000244 MiB 00:05:16.488 element at address: 0x200012c6f580 with size: 0.000244 MiB 00:05:16.488 element at address: 0x200012c6f680 with size: 0.000244 MiB 00:05:16.488 element at address: 0x200012c6f780 with size: 0.000244 MiB 00:05:16.488 element at address: 0x200012c6f880 with size: 0.000244 MiB 00:05:16.488 element at address: 0x200012cefbc0 with size: 0.000244 MiB 00:05:16.488 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:05:16.488 element at address: 0x20001967cec0 with size: 0.000244 MiB 00:05:16.488 element at address: 0x20001967cfc0 with size: 0.000244 MiB 00:05:16.488 element at address: 0x20001967d0c0 with size: 0.000244 MiB 00:05:16.488 element at address: 0x20001967d1c0 with size: 0.000244 MiB 00:05:16.488 element at address: 0x20001967d2c0 with size: 0.000244 MiB 00:05:16.488 element at address: 0x20001967d3c0 with size: 0.000244 MiB 00:05:16.488 
element at address: 0x20001967d4c0 with size: 0.000244 MiB 00:05:16.488 element at address: 0x20001967d5c0 with size: 0.000244 MiB 00:05:16.488 element at address: 0x20001967d6c0 with size: 0.000244 MiB 00:05:16.488 element at address: 0x20001967d7c0 with size: 0.000244 MiB 00:05:16.488 element at address: 0x20001967d8c0 with size: 0.000244 MiB 00:05:16.488 element at address: 0x20001967d9c0 with size: 0.000244 MiB 00:05:16.488 element at address: 0x2000196fdd00 with size: 0.000244 MiB 00:05:16.488 element at address: 0x200019affc40 with size: 0.000244 MiB 00:05:16.488 element at address: 0x200019defbc0 with size: 0.000244 MiB 00:05:16.488 element at address: 0x200019defcc0 with size: 0.000244 MiB 00:05:16.488 element at address: 0x200019ebc680 with size: 0.000244 MiB 00:05:16.488 element at address: 0x20001b4901c0 with size: 0.000244 MiB 00:05:16.488 element at address: 0x20001b4902c0 with size: 0.000244 MiB 00:05:16.488 element at address: 0x20001b4903c0 with size: 0.000244 MiB 00:05:16.488 element at address: 0x20001b4904c0 with size: 0.000244 MiB 00:05:16.488 element at address: 0x20001b4905c0 with size: 0.000244 MiB 00:05:16.488 element at address: 0x20001b4906c0 with size: 0.000244 MiB 00:05:16.488 element at address: 0x20001b4907c0 with size: 0.000244 MiB 00:05:16.488 element at address: 0x20001b4908c0 with size: 0.000244 MiB 00:05:16.488 element at address: 0x20001b4909c0 with size: 0.000244 MiB 00:05:16.488 element at address: 0x20001b490ac0 with size: 0.000244 MiB 00:05:16.488 element at address: 0x20001b490bc0 with size: 0.000244 MiB 00:05:16.488 element at address: 0x20001b490cc0 with size: 0.000244 MiB 00:05:16.488 element at address: 0x20001b490dc0 with size: 0.000244 MiB 00:05:16.488 element at address: 0x20001b490ec0 with size: 0.000244 MiB 00:05:16.488 element at address: 0x20001b490fc0 with size: 0.000244 MiB 00:05:16.488 element at address: 0x20001b4910c0 with size: 0.000244 MiB 00:05:16.488 element at address: 0x20001b4911c0 with size: 0.000244 
MiB 00:05:16.488 element at address: 0x20001b4912c0 with size: 0.000244 MiB 00:05:16.488 element at address: 0x20001b4913c0 with size: 0.000244 MiB 00:05:16.488 element at address: 0x20001b4914c0 with size: 0.000244 MiB 00:05:16.488 element at address: 0x20001b4915c0 with size: 0.000244 MiB 00:05:16.488 element at address: 0x20001b4916c0 with size: 0.000244 MiB 00:05:16.488 element at address: 0x20001b4917c0 with size: 0.000244 MiB 00:05:16.488 element at address: 0x20001b4918c0 with size: 0.000244 MiB 00:05:16.488 element at address: 0x20001b4919c0 with size: 0.000244 MiB 00:05:16.488 element at address: 0x20001b491ac0 with size: 0.000244 MiB 00:05:16.488 element at address: 0x20001b491bc0 with size: 0.000244 MiB 00:05:16.488 element at address: 0x20001b491cc0 with size: 0.000244 MiB 00:05:16.488 element at address: 0x20001b491dc0 with size: 0.000244 MiB 00:05:16.488 element at address: 0x20001b491ec0 with size: 0.000244 MiB 00:05:16.488 element at address: 0x20001b491fc0 with size: 0.000244 MiB 00:05:16.488 element at address: 0x20001b4920c0 with size: 0.000244 MiB 00:05:16.488 element at address: 0x20001b4921c0 with size: 0.000244 MiB 00:05:16.488 element at address: 0x20001b4922c0 with size: 0.000244 MiB 00:05:16.488 element at address: 0x20001b4923c0 with size: 0.000244 MiB 00:05:16.488 element at address: 0x20001b4924c0 with size: 0.000244 MiB 00:05:16.488 element at address: 0x20001b4925c0 with size: 0.000244 MiB 00:05:16.488 element at address: 0x20001b4926c0 with size: 0.000244 MiB 00:05:16.488 element at address: 0x20001b4927c0 with size: 0.000244 MiB 00:05:16.488 element at address: 0x20001b4928c0 with size: 0.000244 MiB 00:05:16.488 element at address: 0x20001b4929c0 with size: 0.000244 MiB 00:05:16.488 element at address: 0x20001b492ac0 with size: 0.000244 MiB 00:05:16.488 element at address: 0x20001b492bc0 with size: 0.000244 MiB 00:05:16.488 element at address: 0x20001b492cc0 with size: 0.000244 MiB 00:05:16.488 element at address: 0x20001b492dc0 
with size: 0.000244 MiB 00:05:16.488 element at address: 0x20001b492ec0 with size: 0.000244 MiB 00:05:16.488 element at address: 0x20001b492fc0 with size: 0.000244 MiB 00:05:16.488 element at address: 0x20001b4930c0 with size: 0.000244 MiB 00:05:16.488 element at address: 0x20001b4931c0 with size: 0.000244 MiB 00:05:16.488 element at address: 0x20001b4932c0 with size: 0.000244 MiB 00:05:16.488 element at address: 0x20001b4933c0 with size: 0.000244 MiB 00:05:16.488 element at address: 0x20001b4934c0 with size: 0.000244 MiB 00:05:16.488 element at address: 0x20001b4935c0 with size: 0.000244 MiB 00:05:16.488 element at address: 0x20001b4936c0 with size: 0.000244 MiB 00:05:16.488 element at address: 0x20001b4937c0 with size: 0.000244 MiB 00:05:16.488 element at address: 0x20001b4938c0 with size: 0.000244 MiB 00:05:16.489 element at address: 0x20001b4939c0 with size: 0.000244 MiB 00:05:16.489 element at address: 0x20001b493ac0 with size: 0.000244 MiB 00:05:16.489 element at address: 0x20001b493bc0 with size: 0.000244 MiB 00:05:16.489 element at address: 0x20001b493cc0 with size: 0.000244 MiB 00:05:16.489 element at address: 0x20001b493dc0 with size: 0.000244 MiB 00:05:16.489 element at address: 0x20001b493ec0 with size: 0.000244 MiB 00:05:16.489 element at address: 0x20001b493fc0 with size: 0.000244 MiB 00:05:16.489 element at address: 0x20001b4940c0 with size: 0.000244 MiB 00:05:16.489 element at address: 0x20001b4941c0 with size: 0.000244 MiB 00:05:16.489 element at address: 0x20001b4942c0 with size: 0.000244 MiB 00:05:16.489 element at address: 0x20001b4943c0 with size: 0.000244 MiB 00:05:16.489 element at address: 0x20001b4944c0 with size: 0.000244 MiB 00:05:16.489 element at address: 0x20001b4945c0 with size: 0.000244 MiB 00:05:16.489 element at address: 0x20001b4946c0 with size: 0.000244 MiB 00:05:16.489 element at address: 0x20001b4947c0 with size: 0.000244 MiB 00:05:16.489 element at address: 0x20001b4948c0 with size: 0.000244 MiB 00:05:16.489 element at 
address: 0x20001b4949c0 with size: 0.000244 MiB 00:05:16.489 element at address: 0x20001b494ac0 with size: 0.000244 MiB 00:05:16.489 element at address: 0x20001b494bc0 with size: 0.000244 MiB 00:05:16.489 element at address: 0x20001b494cc0 with size: 0.000244 MiB 00:05:16.489 element at address: 0x20001b494dc0 with size: 0.000244 MiB 00:05:16.489 element at address: 0x20001b494ec0 with size: 0.000244 MiB 00:05:16.489 element at address: 0x20001b494fc0 with size: 0.000244 MiB 00:05:16.489 element at address: 0x20001b4950c0 with size: 0.000244 MiB 00:05:16.489 element at address: 0x20001b4951c0 with size: 0.000244 MiB 00:05:16.489 element at address: 0x20001b4952c0 with size: 0.000244 MiB 00:05:16.489 element at address: 0x20001b4953c0 with size: 0.000244 MiB 00:05:16.489 element at address: 0x200028863f40 with size: 0.000244 MiB 00:05:16.489 element at address: 0x200028864040 with size: 0.000244 MiB 00:05:16.489 element at address: 0x20002886ad00 with size: 0.000244 MiB 00:05:16.489 element at address: 0x20002886af80 with size: 0.000244 MiB 00:05:16.489 element at address: 0x20002886b080 with size: 0.000244 MiB 00:05:16.489 element at address: 0x20002886b180 with size: 0.000244 MiB 00:05:16.489 element at address: 0x20002886b280 with size: 0.000244 MiB 00:05:16.489 element at address: 0x20002886b380 with size: 0.000244 MiB 00:05:16.489 element at address: 0x20002886b480 with size: 0.000244 MiB 00:05:16.489 element at address: 0x20002886b580 with size: 0.000244 MiB 00:05:16.489 element at address: 0x20002886b680 with size: 0.000244 MiB 00:05:16.489 element at address: 0x20002886b780 with size: 0.000244 MiB 00:05:16.489 element at address: 0x20002886b880 with size: 0.000244 MiB 00:05:16.489 element at address: 0x20002886b980 with size: 0.000244 MiB 00:05:16.489 element at address: 0x20002886ba80 with size: 0.000244 MiB 00:05:16.489 element at address: 0x20002886bb80 with size: 0.000244 MiB 00:05:16.489 element at address: 0x20002886bc80 with size: 0.000244 MiB 
00:05:16.489 element at address: 0x20002886bd80 with size: 0.000244 MiB 00:05:16.489 element at address: 0x20002886be80 with size: 0.000244 MiB 00:05:16.489 element at address: 0x20002886bf80 with size: 0.000244 MiB 00:05:16.489 element at address: 0x20002886c080 with size: 0.000244 MiB 00:05:16.489 element at address: 0x20002886c180 with size: 0.000244 MiB 00:05:16.489 element at address: 0x20002886c280 with size: 0.000244 MiB 00:05:16.489 element at address: 0x20002886c380 with size: 0.000244 MiB 00:05:16.489 element at address: 0x20002886c480 with size: 0.000244 MiB 00:05:16.489 element at address: 0x20002886c580 with size: 0.000244 MiB 00:05:16.489 element at address: 0x20002886c680 with size: 0.000244 MiB 00:05:16.489 element at address: 0x20002886c780 with size: 0.000244 MiB 00:05:16.489 element at address: 0x20002886c880 with size: 0.000244 MiB 00:05:16.489 element at address: 0x20002886c980 with size: 0.000244 MiB 00:05:16.489 element at address: 0x20002886ca80 with size: 0.000244 MiB 00:05:16.489 element at address: 0x20002886cb80 with size: 0.000244 MiB 00:05:16.489 element at address: 0x20002886cc80 with size: 0.000244 MiB 00:05:16.489 element at address: 0x20002886cd80 with size: 0.000244 MiB 00:05:16.489 element at address: 0x20002886ce80 with size: 0.000244 MiB 00:05:16.489 element at address: 0x20002886cf80 with size: 0.000244 MiB 00:05:16.489 element at address: 0x20002886d080 with size: 0.000244 MiB 00:05:16.489 element at address: 0x20002886d180 with size: 0.000244 MiB 00:05:16.489 element at address: 0x20002886d280 with size: 0.000244 MiB 00:05:16.489 element at address: 0x20002886d380 with size: 0.000244 MiB 00:05:16.489 element at address: 0x20002886d480 with size: 0.000244 MiB 00:05:16.489 element at address: 0x20002886d580 with size: 0.000244 MiB 00:05:16.489 element at address: 0x20002886d680 with size: 0.000244 MiB 00:05:16.489 element at address: 0x20002886d780 with size: 0.000244 MiB 00:05:16.489 element at address: 0x20002886d880 with 
size: 0.000244 MiB 00:05:16.489 element at address: 0x20002886d980 with size: 0.000244 MiB 00:05:16.489 element at address: 0x20002886da80 with size: 0.000244 MiB 00:05:16.489 element at address: 0x20002886db80 with size: 0.000244 MiB 00:05:16.489 element at address: 0x20002886dc80 with size: 0.000244 MiB 00:05:16.489 element at address: 0x20002886dd80 with size: 0.000244 MiB 00:05:16.489 element at address: 0x20002886de80 with size: 0.000244 MiB 00:05:16.489 element at address: 0x20002886df80 with size: 0.000244 MiB 00:05:16.489 element at address: 0x20002886e080 with size: 0.000244 MiB 00:05:16.489 element at address: 0x20002886e180 with size: 0.000244 MiB 00:05:16.489 element at address: 0x20002886e280 with size: 0.000244 MiB 00:05:16.489 element at address: 0x20002886e380 with size: 0.000244 MiB 00:05:16.489 element at address: 0x20002886e480 with size: 0.000244 MiB 00:05:16.489 element at address: 0x20002886e580 with size: 0.000244 MiB 00:05:16.489 element at address: 0x20002886e680 with size: 0.000244 MiB 00:05:16.489 element at address: 0x20002886e780 with size: 0.000244 MiB 00:05:16.489 element at address: 0x20002886e880 with size: 0.000244 MiB 00:05:16.489 element at address: 0x20002886e980 with size: 0.000244 MiB 00:05:16.489 element at address: 0x20002886ea80 with size: 0.000244 MiB 00:05:16.489 element at address: 0x20002886eb80 with size: 0.000244 MiB 00:05:16.489 element at address: 0x20002886ec80 with size: 0.000244 MiB 00:05:16.489 element at address: 0x20002886ed80 with size: 0.000244 MiB 00:05:16.489 element at address: 0x20002886ee80 with size: 0.000244 MiB 00:05:16.489 element at address: 0x20002886ef80 with size: 0.000244 MiB 00:05:16.489 element at address: 0x20002886f080 with size: 0.000244 MiB 00:05:16.489 element at address: 0x20002886f180 with size: 0.000244 MiB 00:05:16.489 element at address: 0x20002886f280 with size: 0.000244 MiB 00:05:16.489 element at address: 0x20002886f380 with size: 0.000244 MiB 00:05:16.489 element at address: 
0x20002886f480 with size: 0.000244 MiB 00:05:16.489 element at address: 0x20002886f580 with size: 0.000244 MiB 00:05:16.489 element at address: 0x20002886f680 with size: 0.000244 MiB 00:05:16.489 element at address: 0x20002886f780 with size: 0.000244 MiB 00:05:16.489 element at address: 0x20002886f880 with size: 0.000244 MiB 00:05:16.489 element at address: 0x20002886f980 with size: 0.000244 MiB 00:05:16.489 element at address: 0x20002886fa80 with size: 0.000244 MiB 00:05:16.489 element at address: 0x20002886fb80 with size: 0.000244 MiB 00:05:16.489 element at address: 0x20002886fc80 with size: 0.000244 MiB 00:05:16.489 element at address: 0x20002886fd80 with size: 0.000244 MiB 00:05:16.489 element at address: 0x20002886fe80 with size: 0.000244 MiB 00:05:16.489 list of memzone associated elements. size: 607.930908 MiB 00:05:16.489 element at address: 0x20001b4954c0 with size: 211.416809 MiB 00:05:16.489 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:16.489 element at address: 0x20002886ff80 with size: 157.562622 MiB 00:05:16.489 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:16.489 element at address: 0x200012df1e40 with size: 100.055115 MiB 00:05:16.489 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_59934_0 00:05:16.489 element at address: 0x200000dff340 with size: 48.003113 MiB 00:05:16.489 associated memzone info: size: 48.002930 MiB name: MP_msgpool_59934_0 00:05:16.489 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:05:16.489 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_59934_0 00:05:16.489 element at address: 0x200019fbe900 with size: 20.255615 MiB 00:05:16.489 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:16.489 element at address: 0x2000327feb00 with size: 18.005127 MiB 00:05:16.489 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:16.489 element at address: 0x2000004ffec0 with size: 
3.000305 MiB 00:05:16.489 associated memzone info: size: 3.000122 MiB name: MP_evtpool_59934_0 00:05:16.489 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:05:16.489 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_59934 00:05:16.489 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:05:16.490 associated memzone info: size: 1.007996 MiB name: MP_evtpool_59934 00:05:16.490 element at address: 0x2000196fde00 with size: 1.008179 MiB 00:05:16.490 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:16.490 element at address: 0x200019ebc780 with size: 1.008179 MiB 00:05:16.490 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:16.490 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:05:16.490 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:16.490 element at address: 0x200012cefcc0 with size: 1.008179 MiB 00:05:16.490 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:16.490 element at address: 0x200000cff100 with size: 1.000549 MiB 00:05:16.490 associated memzone info: size: 1.000366 MiB name: RG_ring_0_59934 00:05:16.490 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:05:16.490 associated memzone info: size: 1.000366 MiB name: RG_ring_1_59934 00:05:16.490 element at address: 0x200019affd40 with size: 1.000549 MiB 00:05:16.490 associated memzone info: size: 1.000366 MiB name: RG_ring_4_59934 00:05:16.490 element at address: 0x2000326fe8c0 with size: 1.000549 MiB 00:05:16.490 associated memzone info: size: 1.000366 MiB name: RG_ring_5_59934 00:05:16.490 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:05:16.490 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_59934 00:05:16.490 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:05:16.490 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_59934 00:05:16.490 element at address: 0x20001967dac0 with size: 
0.500549 MiB 00:05:16.490 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:16.490 element at address: 0x200012c6f980 with size: 0.500549 MiB 00:05:16.490 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:16.490 element at address: 0x200019e7c440 with size: 0.250549 MiB 00:05:16.490 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:16.490 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:05:16.490 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_59934 00:05:16.490 element at address: 0x20000085df80 with size: 0.125549 MiB 00:05:16.490 associated memzone info: size: 0.125366 MiB name: RG_ring_2_59934 00:05:16.490 element at address: 0x2000192f5ac0 with size: 0.031799 MiB 00:05:16.490 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:16.490 element at address: 0x200028864140 with size: 0.023804 MiB 00:05:16.490 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:16.490 element at address: 0x200000859d40 with size: 0.016174 MiB 00:05:16.490 associated memzone info: size: 0.015991 MiB name: RG_ring_3_59934 00:05:16.490 element at address: 0x20002886a2c0 with size: 0.002502 MiB 00:05:16.490 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:16.490 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:05:16.490 associated memzone info: size: 0.000183 MiB name: MP_msgpool_59934 00:05:16.490 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:05:16.490 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_59934 00:05:16.490 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:05:16.490 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_59934 00:05:16.490 element at address: 0x20002886ae00 with size: 0.000366 MiB 00:05:16.490 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:16.490 16:01:42 dpdk_mem_utility -- 
dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:16.490 16:01:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 59934 00:05:16.490 16:01:42 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 59934 ']' 00:05:16.490 16:01:42 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 59934 00:05:16.490 16:01:42 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:05:16.490 16:01:42 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:16.490 16:01:42 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59934 00:05:16.490 16:01:42 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:16.490 16:01:42 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:16.490 16:01:42 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59934' 00:05:16.490 killing process with pid 59934 00:05:16.490 16:01:42 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 59934 00:05:16.490 16:01:42 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 59934 00:05:19.787 00:05:19.787 real 0m4.455s 00:05:19.787 user 0m4.176s 00:05:19.787 sys 0m0.745s 00:05:19.787 16:01:45 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:19.787 16:01:45 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:19.787 ************************************ 00:05:19.787 END TEST dpdk_mem_utility 00:05:19.787 ************************************ 00:05:19.787 16:01:45 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:19.787 16:01:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:19.787 16:01:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:19.787 16:01:45 -- common/autotest_common.sh@10 -- # set +x 00:05:19.787 ************************************ 
00:05:19.787 START TEST event 00:05:19.787 ************************************ 00:05:19.787 16:01:45 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:19.787 * Looking for test storage... 00:05:19.787 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:19.787 16:01:45 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:19.787 16:01:45 event -- common/autotest_common.sh@1711 -- # lcov --version 00:05:19.787 16:01:45 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:19.787 16:01:45 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:19.787 16:01:45 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:19.787 16:01:45 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:19.787 16:01:45 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:19.787 16:01:45 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:19.787 16:01:45 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:19.787 16:01:45 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:19.787 16:01:45 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:19.787 16:01:45 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:19.787 16:01:45 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:19.787 16:01:45 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:19.787 16:01:45 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:19.787 16:01:45 event -- scripts/common.sh@344 -- # case "$op" in 00:05:19.787 16:01:45 event -- scripts/common.sh@345 -- # : 1 00:05:19.787 16:01:45 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:19.787 16:01:45 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:19.787 16:01:45 event -- scripts/common.sh@365 -- # decimal 1 00:05:19.787 16:01:45 event -- scripts/common.sh@353 -- # local d=1 00:05:19.788 16:01:45 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:19.788 16:01:45 event -- scripts/common.sh@355 -- # echo 1 00:05:19.788 16:01:45 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:19.788 16:01:45 event -- scripts/common.sh@366 -- # decimal 2 00:05:19.788 16:01:45 event -- scripts/common.sh@353 -- # local d=2 00:05:19.788 16:01:45 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:19.788 16:01:45 event -- scripts/common.sh@355 -- # echo 2 00:05:19.788 16:01:45 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:19.788 16:01:45 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:19.788 16:01:45 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:19.788 16:01:45 event -- scripts/common.sh@368 -- # return 0 00:05:19.788 16:01:45 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:19.788 16:01:45 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:19.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.788 --rc genhtml_branch_coverage=1 00:05:19.788 --rc genhtml_function_coverage=1 00:05:19.788 --rc genhtml_legend=1 00:05:19.788 --rc geninfo_all_blocks=1 00:05:19.788 --rc geninfo_unexecuted_blocks=1 00:05:19.788 00:05:19.788 ' 00:05:19.788 16:01:45 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:19.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.788 --rc genhtml_branch_coverage=1 00:05:19.788 --rc genhtml_function_coverage=1 00:05:19.788 --rc genhtml_legend=1 00:05:19.788 --rc geninfo_all_blocks=1 00:05:19.788 --rc geninfo_unexecuted_blocks=1 00:05:19.788 00:05:19.788 ' 00:05:19.788 16:01:45 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:19.788 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:05:19.788 --rc genhtml_branch_coverage=1 00:05:19.788 --rc genhtml_function_coverage=1 00:05:19.788 --rc genhtml_legend=1 00:05:19.788 --rc geninfo_all_blocks=1 00:05:19.788 --rc geninfo_unexecuted_blocks=1 00:05:19.788 00:05:19.788 ' 00:05:19.788 16:01:45 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:19.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.788 --rc genhtml_branch_coverage=1 00:05:19.788 --rc genhtml_function_coverage=1 00:05:19.788 --rc genhtml_legend=1 00:05:19.788 --rc geninfo_all_blocks=1 00:05:19.788 --rc geninfo_unexecuted_blocks=1 00:05:19.788 00:05:19.788 ' 00:05:19.788 16:01:45 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:19.788 16:01:45 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:19.788 16:01:45 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:19.788 16:01:45 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:19.788 16:01:45 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:19.788 16:01:45 event -- common/autotest_common.sh@10 -- # set +x 00:05:19.788 ************************************ 00:05:19.788 START TEST event_perf 00:05:19.788 ************************************ 00:05:19.788 16:01:45 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:19.788 Running I/O for 1 seconds...[2024-12-12 16:01:45.825691] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:05:19.788 [2024-12-12 16:01:45.825791] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60048 ] 00:05:19.788 [2024-12-12 16:01:46.003041] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:20.048 [2024-12-12 16:01:46.153242] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:20.048 [2024-12-12 16:01:46.153556] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.048 [2024-12-12 16:01:46.153568] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:05:20.048 Running I/O for 1 seconds...[2024-12-12 16:01:46.153442] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:05:21.430 00:05:21.430 lcore 0: 107209 00:05:21.430 lcore 1: 107211 00:05:21.430 lcore 2: 107210 00:05:21.430 lcore 3: 107209 00:05:21.430 done. 
00:05:21.430 00:05:21.430 real 0m1.639s 00:05:21.430 user 0m4.379s 00:05:21.430 sys 0m0.135s 00:05:21.430 16:01:47 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:21.430 16:01:47 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:21.430 ************************************ 00:05:21.430 END TEST event_perf 00:05:21.430 ************************************ 00:05:21.430 16:01:47 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:21.430 16:01:47 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:21.430 16:01:47 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:21.430 16:01:47 event -- common/autotest_common.sh@10 -- # set +x 00:05:21.430 ************************************ 00:05:21.430 START TEST event_reactor 00:05:21.430 ************************************ 00:05:21.430 16:01:47 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:21.430 [2024-12-12 16:01:47.535977] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:05:21.430 [2024-12-12 16:01:47.536149] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60088 ] 00:05:21.430 [2024-12-12 16:01:47.707585] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.691 [2024-12-12 16:01:47.848509] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.075 test_start 00:05:23.075 oneshot 00:05:23.075 tick 100 00:05:23.075 tick 100 00:05:23.075 tick 250 00:05:23.075 tick 100 00:05:23.075 tick 100 00:05:23.075 tick 100 00:05:23.075 tick 250 00:05:23.075 tick 500 00:05:23.075 tick 100 00:05:23.075 tick 100 00:05:23.075 tick 250 00:05:23.075 tick 100 00:05:23.075 tick 100 00:05:23.075 test_end 00:05:23.075 ************************************ 00:05:23.075 END TEST event_reactor 00:05:23.075 ************************************ 00:05:23.075 00:05:23.075 real 0m1.615s 00:05:23.075 user 0m1.402s 00:05:23.075 sys 0m0.105s 00:05:23.075 16:01:49 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:23.075 16:01:49 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:23.075 16:01:49 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:23.075 16:01:49 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:23.075 16:01:49 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:23.075 16:01:49 event -- common/autotest_common.sh@10 -- # set +x 00:05:23.075 ************************************ 00:05:23.075 START TEST event_reactor_perf 00:05:23.075 ************************************ 00:05:23.076 16:01:49 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:23.076 [2024-12-12 
16:01:49.216300] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:05:23.076 [2024-12-12 16:01:49.216421] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60130 ] 00:05:23.076 [2024-12-12 16:01:49.394278] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.336 [2024-12-12 16:01:49.536975] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.720 test_start 00:05:24.720 test_end 00:05:24.720 Performance: 381157 events per second 00:05:24.720 00:05:24.720 real 0m1.605s 00:05:24.720 user 0m1.406s 00:05:24.720 sys 0m0.092s 00:05:24.720 16:01:50 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:24.720 16:01:50 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:24.720 ************************************ 00:05:24.720 END TEST event_reactor_perf 00:05:24.720 ************************************ 00:05:24.720 16:01:50 event -- event/event.sh@49 -- # uname -s 00:05:24.720 16:01:50 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:24.720 16:01:50 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:24.720 16:01:50 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:24.720 16:01:50 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:24.720 16:01:50 event -- common/autotest_common.sh@10 -- # set +x 00:05:24.720 ************************************ 00:05:24.720 START TEST event_scheduler 00:05:24.720 ************************************ 00:05:24.720 16:01:50 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:24.720 * Looking for test storage... 
00:05:24.720 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:24.720 16:01:50 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:24.720 16:01:50 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:05:24.720 16:01:50 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:24.720 16:01:51 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:24.720 16:01:51 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:24.720 16:01:51 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:24.720 16:01:51 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:24.720 16:01:51 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:24.720 16:01:51 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:24.720 16:01:51 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:24.720 16:01:51 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:24.720 16:01:51 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:24.720 16:01:51 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:24.720 16:01:51 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:24.720 16:01:51 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:24.720 16:01:51 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:24.720 16:01:51 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:24.720 16:01:51 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:24.720 16:01:51 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:24.720 16:01:51 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:24.720 16:01:51 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:24.720 16:01:51 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:24.720 16:01:51 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:24.720 16:01:51 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:24.720 16:01:51 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:24.720 16:01:51 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:24.720 16:01:51 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:24.720 16:01:51 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:24.980 16:01:51 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:24.980 16:01:51 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:24.980 16:01:51 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:24.980 16:01:51 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:24.980 16:01:51 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:24.980 16:01:51 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:24.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.980 --rc genhtml_branch_coverage=1 00:05:24.980 --rc genhtml_function_coverage=1 00:05:24.980 --rc genhtml_legend=1 00:05:24.980 --rc geninfo_all_blocks=1 00:05:24.980 --rc geninfo_unexecuted_blocks=1 00:05:24.980 00:05:24.980 ' 00:05:24.980 16:01:51 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:24.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.980 --rc genhtml_branch_coverage=1 00:05:24.980 --rc genhtml_function_coverage=1 00:05:24.980 --rc 
genhtml_legend=1 00:05:24.980 --rc geninfo_all_blocks=1 00:05:24.980 --rc geninfo_unexecuted_blocks=1 00:05:24.980 00:05:24.980 ' 00:05:24.980 16:01:51 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:24.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.980 --rc genhtml_branch_coverage=1 00:05:24.980 --rc genhtml_function_coverage=1 00:05:24.980 --rc genhtml_legend=1 00:05:24.980 --rc geninfo_all_blocks=1 00:05:24.980 --rc geninfo_unexecuted_blocks=1 00:05:24.980 00:05:24.980 ' 00:05:24.980 16:01:51 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:24.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.980 --rc genhtml_branch_coverage=1 00:05:24.980 --rc genhtml_function_coverage=1 00:05:24.980 --rc genhtml_legend=1 00:05:24.980 --rc geninfo_all_blocks=1 00:05:24.980 --rc geninfo_unexecuted_blocks=1 00:05:24.980 00:05:24.980 ' 00:05:24.981 16:01:51 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:24.981 16:01:51 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=60206 00:05:24.981 16:01:51 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:24.981 16:01:51 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:24.981 16:01:51 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 60206 00:05:24.981 16:01:51 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 60206 ']' 00:05:24.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
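The scripts/common.sh trace above (`lt 1.15 2` via `cmp_versions 1.15 '<' 2`) splits both version strings on `.`, `-` and `:` and compares them field by field. A condensed sketch of that logic, written as a single standalone function rather than the original's step-by-step form:

```shell
#!/usr/bin/env bash
# version_lt A B: return 0 (true) when version A sorts strictly before B,
# a simplified sketch of the cmp_versions logic traced in the log above.
version_lt() {
    local -a ver1 ver2
    local v n
    # Split both versions on '.', '-' and ':' as scripts/common.sh does.
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$2"
    n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < n; v++ )); do
        # A missing component counts as 0, so "1" compares equal to "1.0".
        if (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then return 0; fi
        if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then return 1; fi
    done
    return 1  # versions are equal, so not strictly "less than"
}

version_lt 1.15 2 && echo "lcov 1.15 predates 2"
```

In the traced run the installed lcov (1.15) is compared against 2; field 0 gives 1 < 2, the function returns 0, and the branch-coverage LCOV_OPTS are then exported.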
00:05:24.981 16:01:51 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.981 16:01:51 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:24.981 16:01:51 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:24.981 16:01:51 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:24.981 16:01:51 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:24.981 [2024-12-12 16:01:51.172599] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:05:24.981 [2024-12-12 16:01:51.172755] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60206 ] 00:05:25.240 [2024-12-12 16:01:51.371257] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:25.240 [2024-12-12 16:01:51.495963] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.240 [2024-12-12 16:01:51.496130] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:25.240 [2024-12-12 16:01:51.496266] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:05:25.240 [2024-12-12 16:01:51.496298] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:05:25.810 16:01:52 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:25.810 16:01:52 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:25.810 16:01:52 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:25.810 16:01:52 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.810 16:01:52 
event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:25.810 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:25.810 POWER: Cannot set governor of lcore 0 to userspace 00:05:25.810 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:25.810 POWER: Cannot set governor of lcore 0 to performance 00:05:25.810 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:25.810 POWER: Cannot set governor of lcore 0 to userspace 00:05:25.810 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:25.810 POWER: Cannot set governor of lcore 0 to userspace 00:05:25.810 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:25.810 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:25.810 POWER: Unable to set Power Management Environment for lcore 0 00:05:25.810 [2024-12-12 16:01:52.009315] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:05:25.810 [2024-12-12 16:01:52.009375] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:05:25.810 [2024-12-12 16:01:52.009409] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:25.810 [2024-12-12 16:01:52.009453] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:25.810 [2024-12-12 16:01:52.009490] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:25.810 [2024-12-12 16:01:52.009522] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:25.810 16:01:52 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.810 16:01:52 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:25.810 16:01:52 event.event_scheduler -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:05:25.810 16:01:52 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:26.070 [2024-12-12 16:01:52.336067] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:26.070 16:01:52 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.070 16:01:52 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:26.070 16:01:52 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:26.070 16:01:52 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:26.070 16:01:52 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:26.070 ************************************ 00:05:26.070 START TEST scheduler_create_thread 00:05:26.070 ************************************ 00:05:26.070 16:01:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:26.070 16:01:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:26.070 16:01:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.070 16:01:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:26.070 2 00:05:26.070 16:01:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.070 16:01:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:26.070 16:01:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.070 16:01:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 
00:05:26.070 3 00:05:26.070 16:01:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.070 16:01:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:26.070 16:01:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.070 16:01:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:26.070 4 00:05:26.070 16:01:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.070 16:01:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:26.070 16:01:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.070 16:01:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:26.070 5 00:05:26.070 16:01:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.070 16:01:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:26.070 16:01:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.070 16:01:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:26.070 6 00:05:26.070 16:01:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.070 16:01:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 
00:05:26.070 16:01:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.070 16:01:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:26.330 7 00:05:26.330 16:01:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.330 16:01:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:26.330 16:01:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.330 16:01:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:26.330 8 00:05:26.330 16:01:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.330 16:01:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:26.330 16:01:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.330 16:01:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:26.330 9 00:05:26.330 16:01:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.330 16:01:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:26.330 16:01:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.330 16:01:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:26.330 10 00:05:26.330 16:01:52 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.330 16:01:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:26.330 16:01:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.330 16:01:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:27.773 16:01:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:27.773 16:01:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:27.773 16:01:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:27.773 16:01:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:27.774 16:01:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:28.342 16:01:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:28.343 16:01:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:28.343 16:01:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:28.343 16:01:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:29.286 16:01:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:29.286 16:01:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:29.286 16:01:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin 
scheduler_thread_delete 12 00:05:29.286 16:01:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:29.286 16:01:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:30.226 16:01:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.226 00:05:30.226 real 0m3.883s 00:05:30.226 user 0m0.030s 00:05:30.226 sys 0m0.009s 00:05:30.226 16:01:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:30.226 16:01:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:30.226 ************************************ 00:05:30.226 END TEST scheduler_create_thread 00:05:30.226 ************************************ 00:05:30.226 16:01:56 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:30.226 16:01:56 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 60206 00:05:30.226 16:01:56 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 60206 ']' 00:05:30.226 16:01:56 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 60206 00:05:30.226 16:01:56 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:30.226 16:01:56 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:30.226 16:01:56 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60206 00:05:30.226 killing process with pid 60206 00:05:30.226 16:01:56 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:30.226 16:01:56 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:30.226 16:01:56 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60206' 00:05:30.226 16:01:56 event.event_scheduler -- 
common/autotest_common.sh@973 -- # kill 60206 00:05:30.226 16:01:56 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 60206 00:05:30.485 [2024-12-12 16:01:56.614501] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:31.867 00:05:31.867 real 0m6.955s 00:05:31.867 user 0m14.241s 00:05:31.867 sys 0m0.522s 00:05:31.867 16:01:57 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:31.867 ************************************ 00:05:31.867 END TEST event_scheduler 00:05:31.867 ************************************ 00:05:31.867 16:01:57 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:31.867 16:01:57 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:31.867 16:01:57 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:31.867 16:01:57 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:31.867 16:01:57 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:31.867 16:01:57 event -- common/autotest_common.sh@10 -- # set +x 00:05:31.867 ************************************ 00:05:31.867 START TEST app_repeat 00:05:31.867 ************************************ 00:05:31.867 16:01:57 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:31.867 16:01:57 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:31.867 16:01:57 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:31.867 16:01:57 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:31.867 16:01:57 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:31.867 16:01:57 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:31.867 16:01:57 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:31.867 16:01:57 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:31.867 16:01:57 event.app_repeat -- event/event.sh@19 -- # 
repeat_pid=60324 00:05:31.867 16:01:57 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:31.867 16:01:57 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:31.867 16:01:57 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 60324' 00:05:31.867 Process app_repeat pid: 60324 00:05:31.867 16:01:57 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:31.867 16:01:57 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:31.867 spdk_app_start Round 0 00:05:31.867 16:01:57 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60324 /var/tmp/spdk-nbd.sock 00:05:31.867 16:01:57 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 60324 ']' 00:05:31.867 16:01:57 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:31.867 16:01:57 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:31.867 16:01:57 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:31.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:31.867 16:01:57 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:31.867 16:01:57 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:31.867 [2024-12-12 16:01:57.938341] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:05:31.867 [2024-12-12 16:01:57.938557] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60324 ] 00:05:31.867 [2024-12-12 16:01:58.114421] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:32.127 [2024-12-12 16:01:58.256279] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.127 [2024-12-12 16:01:58.256320] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:32.697 16:01:58 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:32.697 16:01:58 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:32.697 16:01:58 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:32.957 Malloc0 00:05:32.957 16:01:59 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:33.217 Malloc1 00:05:33.217 16:01:59 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:33.217 16:01:59 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:33.217 16:01:59 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:33.217 16:01:59 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:33.217 16:01:59 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:33.217 16:01:59 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:33.217 16:01:59 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:33.217 16:01:59 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:33.217 16:01:59 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:33.217 16:01:59 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:33.217 16:01:59 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:33.217 16:01:59 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:33.217 16:01:59 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:33.217 16:01:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:33.217 16:01:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:33.217 16:01:59 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:33.477 /dev/nbd0 00:05:33.477 16:01:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:33.477 16:01:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:33.477 16:01:59 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:33.477 16:01:59 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:33.477 16:01:59 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:33.477 16:01:59 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:33.477 16:01:59 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:33.477 16:01:59 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:33.477 16:01:59 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:33.477 16:01:59 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:33.477 16:01:59 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:33.477 1+0 records in 00:05:33.477 1+0 
records out 00:05:33.477 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000211295 s, 19.4 MB/s 00:05:33.477 16:01:59 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:33.477 16:01:59 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:33.477 16:01:59 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:33.477 16:01:59 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:33.477 16:01:59 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:33.477 16:01:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:33.477 16:01:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:33.477 16:01:59 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:33.737 /dev/nbd1 00:05:33.737 16:01:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:33.737 16:01:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:33.737 16:01:59 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:33.737 16:01:59 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:33.737 16:01:59 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:33.737 16:01:59 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:33.737 16:01:59 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:33.737 16:01:59 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:33.737 16:01:59 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:33.737 16:01:59 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:33.737 16:01:59 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:33.737 1+0 records in 00:05:33.737 1+0 records out 00:05:33.737 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000356812 s, 11.5 MB/s 00:05:33.737 16:01:59 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:33.738 16:01:59 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:33.738 16:01:59 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:33.738 16:01:59 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:33.738 16:01:59 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:33.738 16:01:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:33.738 16:01:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:33.738 16:01:59 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:33.738 16:01:59 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:33.738 16:01:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:33.998 16:02:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:33.998 { 00:05:33.998 "nbd_device": "/dev/nbd0", 00:05:33.998 "bdev_name": "Malloc0" 00:05:33.998 }, 00:05:33.998 { 00:05:33.998 "nbd_device": "/dev/nbd1", 00:05:33.998 "bdev_name": "Malloc1" 00:05:33.998 } 00:05:33.998 ]' 00:05:33.998 16:02:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:33.998 16:02:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:33.998 { 00:05:33.998 "nbd_device": "/dev/nbd0", 00:05:33.998 "bdev_name": "Malloc0" 00:05:33.998 }, 00:05:33.998 { 00:05:33.998 "nbd_device": "/dev/nbd1", 00:05:33.998 "bdev_name": "Malloc1" 00:05:33.998 } 00:05:33.998 ]' 
00:05:33.998 16:02:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:33.998 /dev/nbd1' 00:05:33.998 16:02:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:33.998 16:02:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:33.998 /dev/nbd1' 00:05:33.998 16:02:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:33.998 16:02:00 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:33.998 16:02:00 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:33.998 16:02:00 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:33.998 16:02:00 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:33.998 16:02:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:33.998 16:02:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:33.998 16:02:00 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:33.998 16:02:00 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:33.998 16:02:00 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:33.998 16:02:00 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:33.998 256+0 records in 00:05:33.998 256+0 records out 00:05:33.998 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00793677 s, 132 MB/s 00:05:33.998 16:02:00 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:33.998 16:02:00 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:33.998 256+0 records in 00:05:33.998 256+0 records out 00:05:33.998 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0286019 s, 36.7 MB/s 00:05:33.998 16:02:00 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:33.998 16:02:00 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:33.998 256+0 records in 00:05:33.998 256+0 records out 00:05:33.998 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0254511 s, 41.2 MB/s 00:05:33.998 16:02:00 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:33.998 16:02:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:33.998 16:02:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:33.998 16:02:00 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:33.998 16:02:00 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:33.998 16:02:00 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:33.998 16:02:00 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:33.998 16:02:00 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:33.998 16:02:00 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:33.998 16:02:00 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:33.998 16:02:00 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:33.998 16:02:00 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:33.998 16:02:00 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:33.998 16:02:00 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:33.998 16:02:00 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:33.998 16:02:00 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:33.998 16:02:00 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:33.998 16:02:00 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:33.998 16:02:00 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:34.258 16:02:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:34.258 16:02:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:34.258 16:02:00 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:34.258 16:02:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:34.258 16:02:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:34.258 16:02:00 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:34.258 16:02:00 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:34.258 16:02:00 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:34.258 16:02:00 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:34.258 16:02:00 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:34.517 16:02:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:34.517 16:02:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:34.517 16:02:00 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:34.517 16:02:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:34.517 16:02:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:34.517 16:02:00 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:34.517 16:02:00 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:05:34.517 16:02:00 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:34.517 16:02:00 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:34.517 16:02:00 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:34.517 16:02:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:34.778 16:02:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:34.778 16:02:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:34.778 16:02:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:34.778 16:02:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:34.778 16:02:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:34.778 16:02:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:34.778 16:02:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:34.778 16:02:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:34.778 16:02:01 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:34.778 16:02:01 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:34.778 16:02:01 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:34.778 16:02:01 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:34.778 16:02:01 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:35.349 16:02:01 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:36.727 [2024-12-12 16:02:02.754815] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:36.727 [2024-12-12 16:02:02.898085] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:36.727 [2024-12-12 16:02:02.898094] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.986 
[2024-12-12 16:02:03.137415] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:36.986 [2024-12-12 16:02:03.137527] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:38.365 spdk_app_start Round 1 00:05:38.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:38.365 16:02:04 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:38.365 16:02:04 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:38.365 16:02:04 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60324 /var/tmp/spdk-nbd.sock 00:05:38.365 16:02:04 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 60324 ']' 00:05:38.365 16:02:04 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:38.365 16:02:04 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:38.365 16:02:04 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:38.365 16:02:04 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:38.365 16:02:04 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:38.365 16:02:04 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:38.365 16:02:04 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:38.365 16:02:04 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:38.624 Malloc0 00:05:38.624 16:02:04 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:38.893 Malloc1 00:05:38.893 16:02:05 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:38.893 16:02:05 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.893 16:02:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:38.893 16:02:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:38.893 16:02:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.893 16:02:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:38.893 16:02:05 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:38.893 16:02:05 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.893 16:02:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:38.893 16:02:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:38.893 16:02:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.893 16:02:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:38.893 16:02:05 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:38.893 16:02:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:39.153 16:02:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:39.153 16:02:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:39.153 /dev/nbd0 00:05:39.153 16:02:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:39.153 16:02:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:39.153 16:02:05 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:39.153 16:02:05 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:39.153 16:02:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:39.153 16:02:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:39.153 16:02:05 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:39.153 16:02:05 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:39.153 16:02:05 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:39.153 16:02:05 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:39.153 16:02:05 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:39.153 1+0 records in 00:05:39.153 1+0 records out 00:05:39.153 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000336181 s, 12.2 MB/s 00:05:39.153 16:02:05 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:39.153 16:02:05 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:39.153 16:02:05 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:39.413 
16:02:05 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:39.413 16:02:05 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:39.413 16:02:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:39.413 16:02:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:39.413 16:02:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:39.413 /dev/nbd1 00:05:39.413 16:02:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:39.413 16:02:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:39.413 16:02:05 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:39.413 16:02:05 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:39.413 16:02:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:39.413 16:02:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:39.413 16:02:05 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:39.413 16:02:05 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:39.413 16:02:05 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:39.413 16:02:05 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:39.413 16:02:05 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:39.413 1+0 records in 00:05:39.413 1+0 records out 00:05:39.413 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000499643 s, 8.2 MB/s 00:05:39.413 16:02:05 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:39.413 16:02:05 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:39.413 16:02:05 event.app_repeat 
-- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:39.413 16:02:05 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:39.413 16:02:05 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:39.413 16:02:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:39.413 16:02:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:39.413 16:02:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:39.413 16:02:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:39.413 16:02:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:39.672 16:02:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:39.672 { 00:05:39.672 "nbd_device": "/dev/nbd0", 00:05:39.672 "bdev_name": "Malloc0" 00:05:39.672 }, 00:05:39.672 { 00:05:39.672 "nbd_device": "/dev/nbd1", 00:05:39.672 "bdev_name": "Malloc1" 00:05:39.672 } 00:05:39.672 ]' 00:05:39.672 16:02:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:39.672 { 00:05:39.672 "nbd_device": "/dev/nbd0", 00:05:39.672 "bdev_name": "Malloc0" 00:05:39.672 }, 00:05:39.672 { 00:05:39.672 "nbd_device": "/dev/nbd1", 00:05:39.672 "bdev_name": "Malloc1" 00:05:39.672 } 00:05:39.673 ]' 00:05:39.673 16:02:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:39.673 16:02:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:39.673 /dev/nbd1' 00:05:39.673 16:02:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:39.673 /dev/nbd1' 00:05:39.673 16:02:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:39.673 16:02:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:39.673 16:02:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:39.673 
16:02:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:39.673 16:02:05 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:39.673 16:02:05 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:39.673 16:02:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:39.673 16:02:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:39.673 16:02:05 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:39.673 16:02:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:39.673 16:02:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:39.673 16:02:05 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:39.673 256+0 records in 00:05:39.673 256+0 records out 00:05:39.673 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00470239 s, 223 MB/s 00:05:39.673 16:02:06 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:39.673 16:02:06 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:39.933 256+0 records in 00:05:39.933 256+0 records out 00:05:39.933 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0221805 s, 47.3 MB/s 00:05:39.933 16:02:06 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:39.933 16:02:06 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:39.933 256+0 records in 00:05:39.933 256+0 records out 00:05:39.933 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0266915 s, 39.3 MB/s 00:05:39.933 16:02:06 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:05:39.933 16:02:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:39.933 16:02:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:39.933 16:02:06 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:39.933 16:02:06 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:39.933 16:02:06 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:39.933 16:02:06 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:39.933 16:02:06 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:39.933 16:02:06 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:39.933 16:02:06 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:39.933 16:02:06 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:39.933 16:02:06 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:39.933 16:02:06 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:39.933 16:02:06 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:39.933 16:02:06 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:39.933 16:02:06 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:39.933 16:02:06 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:39.933 16:02:06 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:39.933 16:02:06 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:40.193 16:02:06 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:40.193 16:02:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:40.193 16:02:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:40.193 16:02:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:40.193 16:02:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:40.193 16:02:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:40.193 16:02:06 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:40.193 16:02:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:40.193 16:02:06 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:40.193 16:02:06 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:40.193 16:02:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:40.193 16:02:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:40.193 16:02:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:40.193 16:02:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:40.193 16:02:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:40.193 16:02:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:40.193 16:02:06 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:40.193 16:02:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:40.193 16:02:06 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:40.193 16:02:06 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.452 16:02:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:40.452 16:02:06 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:40.452 16:02:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:40.452 16:02:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:40.452 16:02:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:40.452 16:02:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:40.452 16:02:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:40.452 16:02:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:40.452 16:02:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:40.452 16:02:06 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:40.452 16:02:06 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:40.452 16:02:06 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:40.452 16:02:06 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:40.452 16:02:06 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:41.020 16:02:07 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:42.400 [2024-12-12 16:02:08.476500] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:42.400 [2024-12-12 16:02:08.612691] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.400 [2024-12-12 16:02:08.612719] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:42.659 [2024-12-12 16:02:08.844933] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:42.659 [2024-12-12 16:02:08.845071] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:44.039 spdk_app_start Round 2 00:05:44.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:44.039 16:02:10 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:44.039 16:02:10 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:44.039 16:02:10 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60324 /var/tmp/spdk-nbd.sock 00:05:44.039 16:02:10 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 60324 ']' 00:05:44.039 16:02:10 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:44.039 16:02:10 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:44.039 16:02:10 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:44.039 16:02:10 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:44.039 16:02:10 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:44.299 16:02:10 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:44.299 16:02:10 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:44.299 16:02:10 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:44.558 Malloc0 00:05:44.558 16:02:10 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:44.817 Malloc1 00:05:44.817 16:02:10 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:44.817 16:02:10 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:44.817 16:02:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:44.817 16:02:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:44.817 16:02:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:44.817 16:02:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:44.817 16:02:10 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:44.817 16:02:10 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:44.817 16:02:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:44.817 16:02:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:44.817 16:02:10 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:44.817 16:02:10 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:44.817 16:02:10 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:44.817 16:02:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:44.818 16:02:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:44.818 16:02:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:45.078 /dev/nbd0 00:05:45.078 16:02:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:45.078 16:02:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:45.078 16:02:11 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:45.078 16:02:11 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:45.078 16:02:11 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:45.078 16:02:11 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:45.078 16:02:11 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:45.078 16:02:11 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:45.078 16:02:11 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 
00:05:45.078 16:02:11 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:45.078 16:02:11 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:45.078 1+0 records in 00:05:45.078 1+0 records out 00:05:45.078 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000317749 s, 12.9 MB/s 00:05:45.078 16:02:11 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:45.078 16:02:11 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:45.078 16:02:11 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:45.078 16:02:11 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:45.078 16:02:11 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:45.078 16:02:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:45.078 16:02:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:45.078 16:02:11 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:45.078 /dev/nbd1 00:05:45.337 16:02:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:45.337 16:02:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:45.337 16:02:11 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:45.337 16:02:11 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:45.337 16:02:11 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:45.337 16:02:11 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:45.337 16:02:11 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:45.337 16:02:11 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:05:45.337 16:02:11 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:45.337 16:02:11 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:45.337 16:02:11 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:45.337 1+0 records in 00:05:45.337 1+0 records out 00:05:45.337 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000340533 s, 12.0 MB/s 00:05:45.337 16:02:11 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:45.337 16:02:11 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:45.337 16:02:11 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:45.337 16:02:11 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:45.337 16:02:11 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:45.337 16:02:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:45.337 16:02:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:45.337 16:02:11 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:45.338 16:02:11 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.338 16:02:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:45.338 16:02:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:45.338 { 00:05:45.338 "nbd_device": "/dev/nbd0", 00:05:45.338 "bdev_name": "Malloc0" 00:05:45.338 }, 00:05:45.338 { 00:05:45.338 "nbd_device": "/dev/nbd1", 00:05:45.338 "bdev_name": "Malloc1" 00:05:45.338 } 00:05:45.338 ]' 00:05:45.338 16:02:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:45.338 { 
00:05:45.338 "nbd_device": "/dev/nbd0", 00:05:45.338 "bdev_name": "Malloc0" 00:05:45.338 }, 00:05:45.338 { 00:05:45.338 "nbd_device": "/dev/nbd1", 00:05:45.338 "bdev_name": "Malloc1" 00:05:45.338 } 00:05:45.338 ]' 00:05:45.338 16:02:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:45.597 16:02:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:45.597 /dev/nbd1' 00:05:45.597 16:02:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:45.597 /dev/nbd1' 00:05:45.597 16:02:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:45.597 16:02:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:45.597 16:02:11 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:45.597 16:02:11 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:45.597 16:02:11 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:45.597 16:02:11 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:45.597 16:02:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:45.597 16:02:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:45.597 16:02:11 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:45.597 16:02:11 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:45.597 16:02:11 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:45.597 16:02:11 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:45.597 256+0 records in 00:05:45.597 256+0 records out 00:05:45.597 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00582608 s, 180 MB/s 00:05:45.597 16:02:11 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:45.597 16:02:11 event.app_repeat -- 
bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:45.597 256+0 records in 00:05:45.597 256+0 records out 00:05:45.597 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.025244 s, 41.5 MB/s 00:05:45.597 16:02:11 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:45.597 16:02:11 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:45.597 256+0 records in 00:05:45.597 256+0 records out 00:05:45.597 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0289112 s, 36.3 MB/s 00:05:45.597 16:02:11 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:45.597 16:02:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:45.597 16:02:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:45.597 16:02:11 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:45.597 16:02:11 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:45.597 16:02:11 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:45.597 16:02:11 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:45.597 16:02:11 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:45.597 16:02:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:45.597 16:02:11 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:45.597 16:02:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:45.597 16:02:11 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
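The `nbd_dd_data_verify` steps traced above fill a temp file from `/dev/urandom`, `dd` it onto each nbd device with `oflag=direct`, then `cmp` the device contents back against the file. A standalone sketch of that write/verify round-trip, using a plain temp file in place of a real `/dev/nbd` device (the `TARGET_DEV` override and file paths are illustrative, not the SPDK helpers):

```shell
#!/usr/bin/env bash
# Write/verify round-trip in the style of nbd_dd_data_verify:
# 1) generate 1 MiB of random data, 2) copy it to the target,
# 3) byte-compare the target against the source.
set -euo pipefail

src=$(mktemp)
dst=${TARGET_DEV:-$(mktemp)}   # stand-in for /dev/nbd0 in this sketch

dd if=/dev/urandom of="$src" bs=4096 count=256 status=none
# The trace adds oflag=direct, which needs a real block device; omitted here.
dd if="$src" of="$dst" bs=4096 count=256 conv=notrunc status=none
cmp -b -n 1M "$src" "$dst" && echo "verify OK"

rm -f "$src"
```

`cmp -b -n 1M` is taken directly from the trace: `-n` limits the comparison to the first 1 MiB (4096 × 256 bytes) and `-b` prints differing bytes if any are found.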
00:05:45.597 16:02:11 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:45.597 16:02:11 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.597 16:02:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:45.597 16:02:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:45.597 16:02:11 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:45.597 16:02:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:45.597 16:02:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:45.857 16:02:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:45.857 16:02:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:45.857 16:02:12 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:45.857 16:02:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:45.857 16:02:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:45.857 16:02:12 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:45.857 16:02:12 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:45.857 16:02:12 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:45.857 16:02:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:45.857 16:02:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:46.116 16:02:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:46.116 16:02:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:46.116 16:02:12 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:46.116 16:02:12 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:46.116 16:02:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:46.116 16:02:12 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:46.116 16:02:12 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:46.116 16:02:12 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:46.116 16:02:12 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:46.116 16:02:12 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.116 16:02:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:46.376 16:02:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:46.376 16:02:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:46.376 16:02:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:46.376 16:02:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:46.376 16:02:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:46.377 16:02:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:46.377 16:02:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:46.377 16:02:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:46.377 16:02:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:46.377 16:02:12 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:46.377 16:02:12 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:46.377 16:02:12 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:46.377 16:02:12 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:46.944 16:02:13 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:48.323 
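Both `waitfornbd` (after `nbd_start_disk`) and `waitfornbd_exit` (after `nbd_stop_disk`) in the trace poll `/proc/partitions` up to 20 times for the device name. A minimal version of that polling loop; the `PARTITIONS_FILE` override and the function name are assumptions added here so the sketch can run without a real nbd device:

```shell
# Poll for a block device to appear in the kernel's partition table,
# mirroring the `grep -q -w nbd0 /proc/partitions` retry loop in the trace.
wait_for_block_dev() {
    local name=$1 i
    for ((i = 1; i <= 20; i++)); do
        if grep -q -w "$name" "${PARTITIONS_FILE:-/proc/partitions}"; then
            return 0        # device is registered with the kernel
        fi
        sleep 0.1
    done
    return 1                # timed out after roughly 2 seconds
}
```

The detach-side helper is the same loop inverted: it keeps polling until `grep` stops matching, so the test does not proceed while the kernel still lists the device.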
[2024-12-12 16:02:14.325887] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:48.323 [2024-12-12 16:02:14.462843] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.323 [2024-12-12 16:02:14.462844] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:48.582 [2024-12-12 16:02:14.689387] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:48.582 [2024-12-12 16:02:14.689537] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:49.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:49.965 16:02:16 event.app_repeat -- event/event.sh@38 -- # waitforlisten 60324 /var/tmp/spdk-nbd.sock 00:05:49.965 16:02:16 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 60324 ']' 00:05:49.965 16:02:16 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:49.965 16:02:16 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:49.965 16:02:16 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:49.965 16:02:16 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:49.965 16:02:16 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:49.965 16:02:16 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:49.965 16:02:16 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:49.965 16:02:16 event.app_repeat -- event/event.sh@39 -- # killprocess 60324 00:05:49.965 16:02:16 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 60324 ']' 00:05:49.965 16:02:16 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 60324 00:05:49.965 16:02:16 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:49.965 16:02:16 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:49.965 16:02:16 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60324 00:05:49.965 16:02:16 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:49.965 16:02:16 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:49.965 16:02:16 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60324' 00:05:49.965 killing process with pid 60324 00:05:49.965 16:02:16 event.app_repeat -- common/autotest_common.sh@973 -- # kill 60324 00:05:49.965 16:02:16 event.app_repeat -- common/autotest_common.sh@978 -- # wait 60324 00:05:51.339 spdk_app_start is called in Round 0. 00:05:51.339 Shutdown signal received, stop current app iteration 00:05:51.339 Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 reinitialization... 00:05:51.339 spdk_app_start is called in Round 1. 00:05:51.339 Shutdown signal received, stop current app iteration 00:05:51.339 Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 reinitialization... 00:05:51.339 spdk_app_start is called in Round 2. 
00:05:51.339 Shutdown signal received, stop current app iteration 00:05:51.339 Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 reinitialization... 00:05:51.339 spdk_app_start is called in Round 3. 00:05:51.339 Shutdown signal received, stop current app iteration 00:05:51.339 16:02:17 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:51.339 16:02:17 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:51.339 00:05:51.339 real 0m19.572s 00:05:51.339 user 0m41.273s 00:05:51.339 sys 0m3.124s 00:05:51.339 16:02:17 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:51.339 16:02:17 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:51.339 ************************************ 00:05:51.339 END TEST app_repeat 00:05:51.339 ************************************ 00:05:51.339 16:02:17 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:51.339 16:02:17 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:51.339 16:02:17 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:51.339 16:02:17 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:51.339 16:02:17 event -- common/autotest_common.sh@10 -- # set +x 00:05:51.339 ************************************ 00:05:51.339 START TEST cpu_locks 00:05:51.339 ************************************ 00:05:51.339 16:02:17 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:51.340 * Looking for test storage... 
00:05:51.340 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:51.340 16:02:17 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:51.340 16:02:17 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:05:51.340 16:02:17 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:51.598 16:02:17 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:51.598 16:02:17 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:51.598 16:02:17 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:51.598 16:02:17 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:51.598 16:02:17 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:51.598 16:02:17 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:51.599 16:02:17 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:51.599 16:02:17 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:51.599 16:02:17 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:51.599 16:02:17 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:51.599 16:02:17 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:51.599 16:02:17 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:51.599 16:02:17 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:51.599 16:02:17 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:51.599 16:02:17 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:51.599 16:02:17 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:51.599 16:02:17 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:51.599 16:02:17 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:51.599 16:02:17 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:51.599 16:02:17 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:51.599 16:02:17 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:51.599 16:02:17 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:51.599 16:02:17 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:51.599 16:02:17 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:51.599 16:02:17 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:51.599 16:02:17 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:51.599 16:02:17 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:51.599 16:02:17 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:51.599 16:02:17 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:51.599 16:02:17 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:51.599 16:02:17 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:51.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.599 --rc genhtml_branch_coverage=1 00:05:51.599 --rc genhtml_function_coverage=1 00:05:51.599 --rc genhtml_legend=1 00:05:51.599 --rc geninfo_all_blocks=1 00:05:51.599 --rc geninfo_unexecuted_blocks=1 00:05:51.599 00:05:51.599 ' 00:05:51.599 16:02:17 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:51.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.599 --rc genhtml_branch_coverage=1 00:05:51.599 --rc genhtml_function_coverage=1 00:05:51.599 --rc genhtml_legend=1 00:05:51.599 --rc geninfo_all_blocks=1 00:05:51.599 --rc geninfo_unexecuted_blocks=1 
00:05:51.599 00:05:51.599 ' 00:05:51.599 16:02:17 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:51.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.599 --rc genhtml_branch_coverage=1 00:05:51.599 --rc genhtml_function_coverage=1 00:05:51.599 --rc genhtml_legend=1 00:05:51.599 --rc geninfo_all_blocks=1 00:05:51.599 --rc geninfo_unexecuted_blocks=1 00:05:51.599 00:05:51.599 ' 00:05:51.599 16:02:17 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:51.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.599 --rc genhtml_branch_coverage=1 00:05:51.599 --rc genhtml_function_coverage=1 00:05:51.599 --rc genhtml_legend=1 00:05:51.599 --rc geninfo_all_blocks=1 00:05:51.599 --rc geninfo_unexecuted_blocks=1 00:05:51.599 00:05:51.599 ' 00:05:51.599 16:02:17 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:51.599 16:02:17 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:51.599 16:02:17 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:51.599 16:02:17 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:51.599 16:02:17 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:51.599 16:02:17 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:51.599 16:02:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:51.599 ************************************ 00:05:51.599 START TEST default_locks 00:05:51.599 ************************************ 00:05:51.599 16:02:17 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:51.599 16:02:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=60773 00:05:51.599 16:02:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:51.599 
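The `lt 1.15 2` / `cmp_versions` trace above splits each version string on separators into arrays and compares the fields numerically, field by field. A condensed sketch of that comparison (simplified to dot-separated numeric fields only; the function name is illustrative, not the `scripts/common.sh` original):

```shell
# Return success if the first dotted version string is numerically
# lower than the second, in the spirit of cmp_versions in the trace.
version_lt() {
    local IFS=.
    local -a v1=($1) v2=($2)
    local i x y
    for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
        x=${v1[i]:-0}; y=${v2[i]:-0}   # missing fields count as 0
        ((x < y)) && return 0          # first differing field decides
        ((x > y)) && return 1
    done
    return 1                           # equal versions: not less-than
}
```

This is why the lcov check passes in the trace: `1.15` compares lower than `2` on the very first field, so the coverage-options branch for old lcov is taken.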
16:02:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 60773 00:05:51.599 16:02:17 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 60773 ']' 00:05:51.599 16:02:17 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:51.599 16:02:17 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:51.599 16:02:17 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:51.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:51.599 16:02:17 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:51.599 16:02:17 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:51.599 [2024-12-12 16:02:17.847760] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:05:51.599 [2024-12-12 16:02:17.848030] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60773 ] 00:05:51.858 [2024-12-12 16:02:18.028281] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.858 [2024-12-12 16:02:18.173879] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.233 16:02:19 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:53.233 16:02:19 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:53.233 16:02:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 60773 00:05:53.233 16:02:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 60773 00:05:53.233 16:02:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:53.233 16:02:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 60773 00:05:53.233 16:02:19 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 60773 ']' 00:05:53.233 16:02:19 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 60773 00:05:53.233 16:02:19 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:53.233 16:02:19 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:53.233 16:02:19 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60773 00:05:53.492 16:02:19 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:53.492 killing process with pid 60773 00:05:53.492 16:02:19 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:53.492 16:02:19 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 60773' 00:05:53.492 16:02:19 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 60773 00:05:53.492 16:02:19 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 60773 00:05:56.025 16:02:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 60773 00:05:56.025 16:02:22 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:56.025 16:02:22 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60773 00:05:56.025 16:02:22 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:56.025 16:02:22 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:56.025 16:02:22 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:56.025 16:02:22 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:56.025 16:02:22 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 60773 00:05:56.025 16:02:22 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 60773 ']' 00:05:56.025 16:02:22 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.025 16:02:22 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:56.025 16:02:22 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:56.025 16:02:22 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:56.025 16:02:22 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:56.025 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60773) - No such process 00:05:56.025 ERROR: process (pid: 60773) is no longer running 00:05:56.025 16:02:22 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:56.025 16:02:22 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:56.025 16:02:22 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:56.025 16:02:22 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:56.025 16:02:22 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:56.025 16:02:22 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:56.025 16:02:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:56.025 16:02:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:56.025 16:02:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:56.025 16:02:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:56.025 00:05:56.025 real 0m4.618s 00:05:56.025 user 0m4.357s 00:05:56.025 sys 0m0.834s 00:05:56.025 16:02:22 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:56.025 16:02:22 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:56.025 ************************************ 00:05:56.025 END TEST default_locks 00:05:56.025 ************************************ 00:05:56.284 16:02:22 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:56.284 16:02:22 event.cpu_locks -- common/autotest_common.sh@1105 -- # 
'[' 2 -le 1 ']' 00:05:56.284 16:02:22 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:56.284 16:02:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:56.284 ************************************ 00:05:56.284 START TEST default_locks_via_rpc 00:05:56.284 ************************************ 00:05:56.284 16:02:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:56.284 16:02:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=60854 00:05:56.284 16:02:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:56.284 16:02:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 60854 00:05:56.284 16:02:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60854 ']' 00:05:56.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:56.284 16:02:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.284 16:02:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:56.284 16:02:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.284 16:02:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:56.284 16:02:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.284 [2024-12-12 16:02:22.531030] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:05:56.284 [2024-12-12 16:02:22.531280] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60854 ] 00:05:56.543 [2024-12-12 16:02:22.712074] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.543 [2024-12-12 16:02:22.860185] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.919 16:02:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:57.919 16:02:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:57.919 16:02:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:57.919 16:02:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.919 16:02:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.919 16:02:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:57.919 16:02:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:57.919 16:02:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:57.919 16:02:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:57.919 16:02:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:57.919 16:02:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:57.919 16:02:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.919 16:02:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.919 16:02:23 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:57.919 16:02:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 60854 00:05:57.919 16:02:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 60854 00:05:57.919 16:02:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:58.178 16:02:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 60854 00:05:58.178 16:02:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 60854 ']' 00:05:58.178 16:02:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 60854 00:05:58.178 16:02:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:58.178 16:02:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:58.178 16:02:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60854 00:05:58.178 killing process with pid 60854 00:05:58.178 16:02:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:58.178 16:02:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:58.178 16:02:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60854' 00:05:58.178 16:02:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 60854 00:05:58.178 16:02:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 60854 00:06:00.728 ************************************ 00:06:00.728 END TEST default_locks_via_rpc 00:06:00.728 ************************************ 00:06:00.728 00:06:00.728 real 0m4.549s 00:06:00.728 user 0m4.269s 00:06:00.728 sys 0m0.832s 00:06:00.728 
16:02:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:00.728 16:02:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.728 16:02:27 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:00.728 16:02:27 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:00.728 16:02:27 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:00.729 16:02:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:00.729 ************************************ 00:06:00.729 START TEST non_locking_app_on_locked_coremask 00:06:00.729 ************************************ 00:06:00.729 16:02:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:06:00.729 16:02:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=60939 00:06:00.729 16:02:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 60939 /var/tmp/spdk.sock 00:06:00.729 16:02:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:00.729 16:02:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60939 ']' 00:06:00.729 16:02:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.729 16:02:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:00.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:00.729 16:02:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.729 16:02:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:00.729 16:02:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:00.987 [2024-12-12 16:02:27.119271] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:06:00.987 [2024-12-12 16:02:27.119381] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60939 ] 00:06:00.987 [2024-12-12 16:02:27.294816] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.248 [2024-12-12 16:02:27.438537] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.187 16:02:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:02.187 16:02:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:02.187 16:02:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=60957 00:06:02.187 16:02:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 60957 /var/tmp/spdk2.sock 00:06:02.187 16:02:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:02.187 16:02:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60957 ']' 00:06:02.187 16:02:28 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:02.187 16:02:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:02.187 16:02:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:02.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:02.187 16:02:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:02.187 16:02:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:02.447 [2024-12-12 16:02:28.611926] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:06:02.447 [2024-12-12 16:02:28.612189] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60957 ] 00:06:02.447 [2024-12-12 16:02:28.792992] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:02.447 [2024-12-12 16:02:28.793089] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.016 [2024-12-12 16:02:29.084090] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.924 16:02:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:04.924 16:02:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:04.924 16:02:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 60939 00:06:04.924 16:02:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:04.924 16:02:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60939 00:06:05.494 16:02:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 60939 00:06:05.494 16:02:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60939 ']' 00:06:05.494 16:02:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60939 00:06:05.494 16:02:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:05.494 16:02:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:05.494 16:02:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60939 00:06:05.494 killing process with pid 60939 00:06:05.494 16:02:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:05.494 16:02:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:05.494 16:02:31 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 60939' 00:06:05.494 16:02:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60939 00:06:05.494 16:02:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60939 00:06:10.805 16:02:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 60957 00:06:10.805 16:02:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60957 ']' 00:06:10.805 16:02:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60957 00:06:10.805 16:02:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:10.805 16:02:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:10.805 16:02:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60957 00:06:10.805 killing process with pid 60957 00:06:10.805 16:02:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:10.805 16:02:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:10.805 16:02:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60957' 00:06:10.805 16:02:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60957 00:06:10.805 16:02:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60957 00:06:14.097 ************************************ 00:06:14.097 END TEST non_locking_app_on_locked_coremask 00:06:14.097 ************************************ 00:06:14.097 00:06:14.097 real 0m12.739s 
00:06:14.097 user 0m12.735s 00:06:14.097 sys 0m1.546s 00:06:14.097 16:02:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:14.097 16:02:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:14.097 16:02:39 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:14.097 16:02:39 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:14.097 16:02:39 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:14.097 16:02:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:14.097 ************************************ 00:06:14.097 START TEST locking_app_on_unlocked_coremask 00:06:14.097 ************************************ 00:06:14.097 16:02:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:06:14.097 16:02:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=61117 00:06:14.097 16:02:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:14.097 16:02:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 61117 /var/tmp/spdk.sock 00:06:14.097 16:02:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 61117 ']' 00:06:14.097 16:02:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.097 16:02:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:14.097 16:02:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:14.097 16:02:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:14.097 16:02:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:14.097 [2024-12-12 16:02:39.944190] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:06:14.097 [2024-12-12 16:02:39.944427] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61117 ] 00:06:14.097 [2024-12-12 16:02:40.113620] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:14.097 [2024-12-12 16:02:40.113784] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.097 [2024-12-12 16:02:40.258515] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.036 16:02:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:15.036 16:02:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:15.036 16:02:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:15.036 16:02:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=61133 00:06:15.036 16:02:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 61133 /var/tmp/spdk2.sock 00:06:15.036 16:02:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 61133 ']' 00:06:15.036 16:02:41 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:15.036 16:02:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:15.036 16:02:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:15.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:15.036 16:02:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:15.036 16:02:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:15.036 [2024-12-12 16:02:41.384859] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:06:15.036 [2024-12-12 16:02:41.385115] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61133 ] 00:06:15.294 [2024-12-12 16:02:41.566965] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.553 [2024-12-12 16:02:41.853678] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.097 16:02:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:18.097 16:02:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:18.097 16:02:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 61133 00:06:18.097 16:02:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 61133 00:06:18.097 16:02:44 event.cpu_locks.locking_app_on_unlocked_coremask -- 
event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:18.357 16:02:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 61117 00:06:18.357 16:02:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 61117 ']' 00:06:18.357 16:02:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 61117 00:06:18.357 16:02:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:18.357 16:02:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:18.357 16:02:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61117 00:06:18.357 killing process with pid 61117 00:06:18.357 16:02:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:18.357 16:02:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:18.357 16:02:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61117' 00:06:18.357 16:02:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 61117 00:06:18.357 16:02:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 61117 00:06:23.639 16:02:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 61133 00:06:23.639 16:02:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 61133 ']' 00:06:23.639 16:02:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 61133 00:06:23.639 16:02:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:23.639 
16:02:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:23.639 16:02:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61133 00:06:23.639 killing process with pid 61133 00:06:23.639 16:02:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:23.639 16:02:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:23.639 16:02:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61133' 00:06:23.639 16:02:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 61133 00:06:23.639 16:02:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 61133 00:06:26.932 00:06:26.932 real 0m12.781s 00:06:26.932 user 0m12.747s 00:06:26.932 sys 0m1.641s 00:06:26.932 16:02:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:26.932 16:02:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:26.932 ************************************ 00:06:26.932 END TEST locking_app_on_unlocked_coremask 00:06:26.932 ************************************ 00:06:26.932 16:02:52 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:26.932 16:02:52 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:26.932 16:02:52 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:26.932 16:02:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:26.932 ************************************ 00:06:26.932 START TEST locking_app_on_locked_coremask 00:06:26.932 
************************************ 00:06:26.932 16:02:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:26.932 16:02:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=61292 00:06:26.932 16:02:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:26.932 16:02:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 61292 /var/tmp/spdk.sock 00:06:26.932 16:02:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 61292 ']' 00:06:26.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:26.932 16:02:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.932 16:02:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:26.932 16:02:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.932 16:02:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:26.932 16:02:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:26.932 [2024-12-12 16:02:52.782813] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:06:26.932 [2024-12-12 16:02:52.783089] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61292 ] 00:06:26.932 [2024-12-12 16:02:52.964535] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.932 [2024-12-12 16:02:53.110665] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.870 16:02:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:27.870 16:02:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:27.870 16:02:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=61314 00:06:27.870 16:02:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:27.870 16:02:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 61314 /var/tmp/spdk2.sock 00:06:27.870 16:02:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:27.870 16:02:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 61314 /var/tmp/spdk2.sock 00:06:27.870 16:02:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:27.870 16:02:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:27.870 16:02:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:27.870 16:02:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t 
"$arg")" in 00:06:27.870 16:02:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 61314 /var/tmp/spdk2.sock 00:06:27.870 16:02:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 61314 ']' 00:06:27.871 16:02:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:27.871 16:02:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:27.871 16:02:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:27.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:27.871 16:02:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:27.871 16:02:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:28.129 [2024-12-12 16:02:54.250491] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:06:28.129 [2024-12-12 16:02:54.250732] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61314 ] 00:06:28.129 [2024-12-12 16:02:54.426354] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 61292 has claimed it. 00:06:28.129 [2024-12-12 16:02:54.426445] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:06:28.697 ERROR: process (pid: 61314) is no longer running 00:06:28.697 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (61314) - No such process 00:06:28.697 16:02:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:28.697 16:02:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:28.697 16:02:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:28.697 16:02:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:28.697 16:02:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:28.697 16:02:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:28.697 16:02:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 61292 00:06:28.697 16:02:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 61292 00:06:28.697 16:02:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:28.956 16:02:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 61292 00:06:28.956 16:02:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 61292 ']' 00:06:28.956 16:02:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 61292 00:06:28.956 16:02:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:28.956 16:02:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:28.956 16:02:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61292 00:06:28.956 
16:02:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:28.956 16:02:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:28.956 16:02:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61292' 00:06:28.956 killing process with pid 61292 00:06:28.956 16:02:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 61292 00:06:28.956 16:02:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 61292 00:06:32.248 00:06:32.248 real 0m5.195s 00:06:32.248 user 0m5.160s 00:06:32.248 sys 0m0.988s 00:06:32.248 16:02:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:32.248 ************************************ 00:06:32.248 END TEST locking_app_on_locked_coremask 00:06:32.248 ************************************ 00:06:32.248 16:02:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:32.248 16:02:57 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:32.248 16:02:57 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:32.248 16:02:57 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:32.248 16:02:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:32.248 ************************************ 00:06:32.248 START TEST locking_overlapped_coremask 00:06:32.248 ************************************ 00:06:32.248 16:02:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:32.248 16:02:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=61389 00:06:32.248 16:02:57 
event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:32.248 16:02:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 61389 /var/tmp/spdk.sock 00:06:32.248 16:02:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 61389 ']' 00:06:32.248 16:02:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.248 16:02:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:32.248 16:02:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:32.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:32.248 16:02:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:32.248 16:02:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:32.248 [2024-12-12 16:02:58.042350] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:06:32.248 [2024-12-12 16:02:58.042611] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61389 ] 00:06:32.248 [2024-12-12 16:02:58.218989] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:32.248 [2024-12-12 16:02:58.364462] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:32.248 [2024-12-12 16:02:58.364603] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.248 [2024-12-12 16:02:58.364644] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:33.189 16:02:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:33.189 16:02:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:33.189 16:02:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=61407 00:06:33.189 16:02:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:33.189 16:02:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 61407 /var/tmp/spdk2.sock 00:06:33.189 16:02:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:33.189 16:02:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 61407 /var/tmp/spdk2.sock 00:06:33.189 16:02:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:33.189 16:02:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:33.189 16:02:59 event.cpu_locks.locking_overlapped_coremask 
-- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:33.189 16:02:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:33.189 16:02:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 61407 /var/tmp/spdk2.sock 00:06:33.189 16:02:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 61407 ']' 00:06:33.189 16:02:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:33.189 16:02:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:33.189 16:02:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:33.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:33.189 16:02:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:33.189 16:02:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:33.189 [2024-12-12 16:02:59.539582] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:06:33.189 [2024-12-12 16:02:59.539868] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61407 ] 00:06:33.449 [2024-12-12 16:02:59.718850] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 61389 has claimed it. 00:06:33.450 [2024-12-12 16:02:59.718947] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:06:34.026 ERROR: process (pid: 61407) is no longer running 00:06:34.026 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (61407) - No such process 00:06:34.026 16:03:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:34.026 16:03:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:34.026 16:03:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:34.026 16:03:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:34.026 16:03:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:34.026 16:03:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:34.026 16:03:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:34.026 16:03:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:34.026 16:03:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:34.026 16:03:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:34.026 16:03:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 61389 00:06:34.026 16:03:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 61389 ']' 00:06:34.026 16:03:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 61389 00:06:34.026 16:03:00 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:34.026 16:03:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:34.026 16:03:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61389 00:06:34.026 16:03:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:34.026 16:03:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:34.026 16:03:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61389' 00:06:34.026 killing process with pid 61389 00:06:34.026 16:03:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 61389 00:06:34.026 16:03:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 61389 00:06:36.567 00:06:36.567 real 0m4.967s 00:06:36.567 user 0m13.302s 00:06:36.567 sys 0m0.835s 00:06:36.567 16:03:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:36.567 16:03:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:36.567 ************************************ 00:06:36.567 END TEST locking_overlapped_coremask 00:06:36.567 ************************************ 00:06:36.827 16:03:02 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:36.827 16:03:02 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:36.827 16:03:02 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:36.827 16:03:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:36.827 ************************************ 00:06:36.827 START TEST 
locking_overlapped_coremask_via_rpc 00:06:36.827 ************************************ 00:06:36.827 16:03:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:36.827 16:03:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=61471 00:06:36.827 16:03:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:36.827 16:03:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 61471 /var/tmp/spdk.sock 00:06:36.827 16:03:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 61471 ']' 00:06:36.827 16:03:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:36.827 16:03:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:36.827 16:03:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:36.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:36.827 16:03:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:36.827 16:03:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:36.827 [2024-12-12 16:03:03.079946] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:06:36.827 [2024-12-12 16:03:03.080104] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61471 ] 00:06:37.087 [2024-12-12 16:03:03.264244] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:37.087 [2024-12-12 16:03:03.264309] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:37.087 [2024-12-12 16:03:03.408812] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:37.087 [2024-12-12 16:03:03.409022] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.087 [2024-12-12 16:03:03.409082] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:38.480 16:03:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:38.480 16:03:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:38.480 16:03:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:38.480 16:03:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=61500 00:06:38.480 16:03:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 61500 /var/tmp/spdk2.sock 00:06:38.480 16:03:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 61500 ']' 00:06:38.480 16:03:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:38.480 16:03:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:38.480 16:03:04 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:38.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:38.480 16:03:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:38.480 16:03:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:38.480 [2024-12-12 16:03:04.592825] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:06:38.480 [2024-12-12 16:03:04.593110] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61500 ] 00:06:38.480 [2024-12-12 16:03:04.773056] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:38.480 [2024-12-12 16:03:04.773138] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:38.740 [2024-12-12 16:03:05.018364] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:06:38.740 [2024-12-12 16:03:05.018524] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:38.740 [2024-12-12 16:03:05.018563] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:06:41.277 16:03:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:41.277 16:03:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:41.277 16:03:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:41.277 16:03:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.277 16:03:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:41.277 16:03:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.277 16:03:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:41.277 16:03:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:41.277 16:03:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:41.277 16:03:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:41.277 16:03:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:41.277 16:03:07 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:41.277 16:03:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:41.277 16:03:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:41.277 16:03:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.277 16:03:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:41.277 [2024-12-12 16:03:07.186211] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 61471 has claimed it. 00:06:41.277 request: 00:06:41.277 { 00:06:41.277 "method": "framework_enable_cpumask_locks", 00:06:41.277 "req_id": 1 00:06:41.277 } 00:06:41.277 Got JSON-RPC error response 00:06:41.277 response: 00:06:41.277 { 00:06:41.277 "code": -32603, 00:06:41.277 "message": "Failed to claim CPU core: 2" 00:06:41.277 } 00:06:41.277 16:03:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:41.277 16:03:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:41.277 16:03:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:41.277 16:03:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:41.277 16:03:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:41.277 16:03:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 61471 /var/tmp/spdk.sock 00:06:41.277 16:03:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # 
'[' -z 61471 ']' 00:06:41.277 16:03:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:41.277 16:03:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:41.277 16:03:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:41.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:41.277 16:03:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:41.277 16:03:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:41.277 16:03:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:41.277 16:03:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:41.277 16:03:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 61500 /var/tmp/spdk2.sock 00:06:41.277 16:03:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 61500 ']' 00:06:41.277 16:03:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:41.277 16:03:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:41.277 16:03:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:41.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:41.277 16:03:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:41.277 16:03:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:41.537 16:03:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:41.537 16:03:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:41.537 16:03:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:41.537 16:03:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:41.537 16:03:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:41.537 16:03:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:41.537 00:06:41.537 real 0m4.711s 00:06:41.537 user 0m1.326s 00:06:41.537 sys 0m0.242s 00:06:41.537 16:03:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:41.537 16:03:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:41.537 ************************************ 00:06:41.537 END TEST locking_overlapped_coremask_via_rpc 00:06:41.537 ************************************ 00:06:41.537 16:03:07 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:41.537 16:03:07 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 61471 ]] 00:06:41.537 16:03:07 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 61471 00:06:41.537 16:03:07 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 61471 ']' 00:06:41.537 16:03:07 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 61471 00:06:41.537 16:03:07 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:41.537 16:03:07 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:41.537 16:03:07 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61471 00:06:41.537 16:03:07 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:41.537 16:03:07 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:41.537 16:03:07 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61471' 00:06:41.537 killing process with pid 61471 00:06:41.537 16:03:07 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 61471 00:06:41.537 16:03:07 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 61471 00:06:44.868 16:03:10 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 61500 ]] 00:06:44.868 16:03:10 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 61500 00:06:44.868 16:03:10 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 61500 ']' 00:06:44.868 16:03:10 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 61500 00:06:44.868 16:03:10 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:44.868 16:03:10 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:44.868 16:03:10 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61500 00:06:44.868 killing process with pid 61500 00:06:44.868 16:03:10 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:44.868 16:03:10 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:44.868 16:03:10 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 61500' 00:06:44.868 16:03:10 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 61500 00:06:44.868 16:03:10 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 61500 00:06:46.777 16:03:13 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:46.777 16:03:13 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:46.777 16:03:13 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 61471 ]] 00:06:46.777 16:03:13 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 61471 00:06:46.777 16:03:13 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 61471 ']' 00:06:46.777 16:03:13 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 61471 00:06:46.777 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (61471) - No such process 00:06:46.777 Process with pid 61471 is not found 00:06:46.777 16:03:13 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 61471 is not found' 00:06:46.777 16:03:13 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 61500 ]] 00:06:46.777 16:03:13 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 61500 00:06:46.777 16:03:13 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 61500 ']' 00:06:46.777 Process with pid 61500 is not found 00:06:46.777 16:03:13 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 61500 00:06:46.777 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (61500) - No such process 00:06:46.777 16:03:13 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 61500 is not found' 00:06:46.777 16:03:13 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:46.777 00:06:46.777 real 0m55.575s 00:06:46.777 user 1m32.773s 00:06:46.777 sys 0m8.337s 00:06:46.777 16:03:13 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:46.777 ************************************ 00:06:46.777 END TEST cpu_locks 00:06:46.777 
************************************ 00:06:46.777 16:03:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:47.036 ************************************ 00:06:47.036 END TEST event 00:06:47.036 ************************************ 00:06:47.036 00:06:47.036 real 1m27.603s 00:06:47.036 user 2m35.723s 00:06:47.036 sys 0m12.716s 00:06:47.036 16:03:13 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:47.036 16:03:13 event -- common/autotest_common.sh@10 -- # set +x 00:06:47.036 16:03:13 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:47.036 16:03:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:47.036 16:03:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:47.036 16:03:13 -- common/autotest_common.sh@10 -- # set +x 00:06:47.036 ************************************ 00:06:47.036 START TEST thread 00:06:47.036 ************************************ 00:06:47.036 16:03:13 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:47.036 * Looking for test storage... 
00:06:47.036 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:47.036 16:03:13 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:47.036 16:03:13 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:06:47.036 16:03:13 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:47.296 16:03:13 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:47.296 16:03:13 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:47.296 16:03:13 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:47.296 16:03:13 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:47.296 16:03:13 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:47.296 16:03:13 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:47.296 16:03:13 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:47.296 16:03:13 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:47.296 16:03:13 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:47.296 16:03:13 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:47.296 16:03:13 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:47.296 16:03:13 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:47.296 16:03:13 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:47.296 16:03:13 thread -- scripts/common.sh@345 -- # : 1 00:06:47.296 16:03:13 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:47.296 16:03:13 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:47.296 16:03:13 thread -- scripts/common.sh@365 -- # decimal 1 00:06:47.296 16:03:13 thread -- scripts/common.sh@353 -- # local d=1 00:06:47.296 16:03:13 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:47.296 16:03:13 thread -- scripts/common.sh@355 -- # echo 1 00:06:47.296 16:03:13 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:47.296 16:03:13 thread -- scripts/common.sh@366 -- # decimal 2 00:06:47.296 16:03:13 thread -- scripts/common.sh@353 -- # local d=2 00:06:47.296 16:03:13 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:47.296 16:03:13 thread -- scripts/common.sh@355 -- # echo 2 00:06:47.296 16:03:13 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:47.296 16:03:13 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:47.296 16:03:13 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:47.296 16:03:13 thread -- scripts/common.sh@368 -- # return 0 00:06:47.296 16:03:13 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:47.296 16:03:13 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:47.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.296 --rc genhtml_branch_coverage=1 00:06:47.296 --rc genhtml_function_coverage=1 00:06:47.296 --rc genhtml_legend=1 00:06:47.296 --rc geninfo_all_blocks=1 00:06:47.296 --rc geninfo_unexecuted_blocks=1 00:06:47.296 00:06:47.296 ' 00:06:47.296 16:03:13 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:47.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.296 --rc genhtml_branch_coverage=1 00:06:47.296 --rc genhtml_function_coverage=1 00:06:47.296 --rc genhtml_legend=1 00:06:47.296 --rc geninfo_all_blocks=1 00:06:47.296 --rc geninfo_unexecuted_blocks=1 00:06:47.297 00:06:47.297 ' 00:06:47.297 16:03:13 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:47.297 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.297 --rc genhtml_branch_coverage=1 00:06:47.297 --rc genhtml_function_coverage=1 00:06:47.297 --rc genhtml_legend=1 00:06:47.297 --rc geninfo_all_blocks=1 00:06:47.297 --rc geninfo_unexecuted_blocks=1 00:06:47.297 00:06:47.297 ' 00:06:47.297 16:03:13 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:47.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.297 --rc genhtml_branch_coverage=1 00:06:47.297 --rc genhtml_function_coverage=1 00:06:47.297 --rc genhtml_legend=1 00:06:47.297 --rc geninfo_all_blocks=1 00:06:47.297 --rc geninfo_unexecuted_blocks=1 00:06:47.297 00:06:47.297 ' 00:06:47.297 16:03:13 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:47.297 16:03:13 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:47.297 16:03:13 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:47.297 16:03:13 thread -- common/autotest_common.sh@10 -- # set +x 00:06:47.297 ************************************ 00:06:47.297 START TEST thread_poller_perf 00:06:47.297 ************************************ 00:06:47.297 16:03:13 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:47.297 [2024-12-12 16:03:13.494537] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:06:47.297 [2024-12-12 16:03:13.494664] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61704 ] 00:06:47.557 [2024-12-12 16:03:13.674632] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.557 [2024-12-12 16:03:13.815903] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.557 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:48.935 [2024-12-12T16:03:15.287Z] ====================================== 00:06:48.935 [2024-12-12T16:03:15.287Z] busy:2297891732 (cyc) 00:06:48.935 [2024-12-12T16:03:15.287Z] total_run_count: 389000 00:06:48.935 [2024-12-12T16:03:15.287Z] tsc_hz: 2290000000 (cyc) 00:06:48.935 [2024-12-12T16:03:15.287Z] ====================================== 00:06:48.935 [2024-12-12T16:03:15.287Z] poller_cost: 5907 (cyc), 2579 (nsec) 00:06:48.935 00:06:48.935 real 0m1.628s 00:06:48.935 user 0m1.396s 00:06:48.935 sys 0m0.124s 00:06:48.935 16:03:15 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:48.935 16:03:15 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:48.935 ************************************ 00:06:48.935 END TEST thread_poller_perf 00:06:48.935 ************************************ 00:06:48.935 16:03:15 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:48.935 16:03:15 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:48.935 16:03:15 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:48.935 16:03:15 thread -- common/autotest_common.sh@10 -- # set +x 00:06:48.936 ************************************ 00:06:48.936 START TEST thread_poller_perf 00:06:48.936 
************************************ 00:06:48.936 16:03:15 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:48.936 [2024-12-12 16:03:15.189919] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:06:48.936 [2024-12-12 16:03:15.190113] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61740 ] 00:06:49.195 [2024-12-12 16:03:15.365983] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.195 [2024-12-12 16:03:15.514541] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.195 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:50.574 [2024-12-12T16:03:16.926Z] ====================================== 00:06:50.574 [2024-12-12T16:03:16.926Z] busy:2293779230 (cyc) 00:06:50.574 [2024-12-12T16:03:16.926Z] total_run_count: 4709000 00:06:50.574 [2024-12-12T16:03:16.926Z] tsc_hz: 2290000000 (cyc) 00:06:50.574 [2024-12-12T16:03:16.926Z] ====================================== 00:06:50.574 [2024-12-12T16:03:16.926Z] poller_cost: 487 (cyc), 212 (nsec) 00:06:50.574 00:06:50.574 real 0m1.621s 00:06:50.574 user 0m1.396s 00:06:50.574 sys 0m0.117s 00:06:50.574 16:03:16 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:50.574 16:03:16 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:50.574 ************************************ 00:06:50.574 END TEST thread_poller_perf 00:06:50.574 ************************************ 00:06:50.574 16:03:16 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:50.574 00:06:50.574 real 0m3.612s 00:06:50.574 user 0m2.969s 00:06:50.574 sys 0m0.440s 00:06:50.574 16:03:16 thread -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:06:50.574 16:03:16 thread -- common/autotest_common.sh@10 -- # set +x 00:06:50.574 ************************************ 00:06:50.574 END TEST thread 00:06:50.574 ************************************ 00:06:50.574 16:03:16 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:50.574 16:03:16 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:50.574 16:03:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:50.574 16:03:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:50.574 16:03:16 -- common/autotest_common.sh@10 -- # set +x 00:06:50.574 ************************************ 00:06:50.574 START TEST app_cmdline 00:06:50.574 ************************************ 00:06:50.574 16:03:16 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:50.834 * Looking for test storage... 00:06:50.834 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:50.834 16:03:17 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:50.834 16:03:17 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:06:50.834 16:03:17 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:50.834 16:03:17 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:50.834 16:03:17 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:50.834 16:03:17 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:50.834 16:03:17 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:50.834 16:03:17 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:50.834 16:03:17 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:50.834 16:03:17 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:50.834 16:03:17 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:50.834 16:03:17 app_cmdline -- scripts/common.sh@338 -- # 
local 'op=<' 00:06:50.834 16:03:17 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:50.834 16:03:17 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:50.834 16:03:17 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:50.834 16:03:17 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:50.834 16:03:17 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:50.834 16:03:17 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:50.834 16:03:17 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:50.834 16:03:17 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:50.834 16:03:17 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:50.835 16:03:17 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:50.835 16:03:17 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:50.835 16:03:17 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:50.835 16:03:17 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:50.835 16:03:17 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:50.835 16:03:17 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:50.835 16:03:17 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:50.835 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:50.835 16:03:17 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:50.835 16:03:17 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:50.835 16:03:17 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:50.835 16:03:17 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:50.835 16:03:17 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:50.835 16:03:17 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:50.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.835 --rc genhtml_branch_coverage=1 00:06:50.835 --rc genhtml_function_coverage=1 00:06:50.835 --rc genhtml_legend=1 00:06:50.835 --rc geninfo_all_blocks=1 00:06:50.835 --rc geninfo_unexecuted_blocks=1 00:06:50.835 00:06:50.835 ' 00:06:50.835 16:03:17 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:50.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.835 --rc genhtml_branch_coverage=1 00:06:50.835 --rc genhtml_function_coverage=1 00:06:50.835 --rc genhtml_legend=1 00:06:50.835 --rc geninfo_all_blocks=1 00:06:50.835 --rc geninfo_unexecuted_blocks=1 00:06:50.835 00:06:50.835 ' 00:06:50.835 16:03:17 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:50.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.835 --rc genhtml_branch_coverage=1 00:06:50.835 --rc genhtml_function_coverage=1 00:06:50.835 --rc genhtml_legend=1 00:06:50.835 --rc geninfo_all_blocks=1 00:06:50.835 --rc geninfo_unexecuted_blocks=1 00:06:50.835 00:06:50.835 ' 00:06:50.835 16:03:17 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:50.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.835 --rc genhtml_branch_coverage=1 00:06:50.835 --rc genhtml_function_coverage=1 00:06:50.835 --rc genhtml_legend=1 00:06:50.835 --rc geninfo_all_blocks=1 00:06:50.835 --rc 
geninfo_unexecuted_blocks=1 00:06:50.835 00:06:50.835 ' 00:06:50.835 16:03:17 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:50.835 16:03:17 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=61829 00:06:50.835 16:03:17 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:50.835 16:03:17 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 61829 00:06:50.835 16:03:17 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 61829 ']' 00:06:50.835 16:03:17 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:50.835 16:03:17 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:50.835 16:03:17 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:50.835 16:03:17 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:50.835 16:03:17 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:51.094 [2024-12-12 16:03:17.189101] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:06:51.094 [2024-12-12 16:03:17.189299] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61829 ] 00:06:51.094 [2024-12-12 16:03:17.363273] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.358 [2024-12-12 16:03:17.515976] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.301 16:03:18 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:52.302 16:03:18 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:52.302 16:03:18 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:52.561 { 00:06:52.561 "version": "SPDK v25.01-pre git sha1 e01cb43b8", 00:06:52.561 "fields": { 00:06:52.561 "major": 25, 00:06:52.561 "minor": 1, 00:06:52.561 "patch": 0, 00:06:52.561 "suffix": "-pre", 00:06:52.561 "commit": "e01cb43b8" 00:06:52.561 } 00:06:52.561 } 00:06:52.561 16:03:18 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:52.561 16:03:18 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:52.561 16:03:18 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:52.561 16:03:18 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:52.561 16:03:18 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:52.561 16:03:18 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:52.561 16:03:18 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.561 16:03:18 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:52.561 16:03:18 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:52.561 16:03:18 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.561 16:03:18 app_cmdline -- 
app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:52.561 16:03:18 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:52.561 16:03:18 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:52.561 16:03:18 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:52.562 16:03:18 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:52.562 16:03:18 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:52.562 16:03:18 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:52.562 16:03:18 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:52.562 16:03:18 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:52.562 16:03:18 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:52.562 16:03:18 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:52.562 16:03:18 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:52.562 16:03:18 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:52.562 16:03:18 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:52.821 request: 00:06:52.821 { 00:06:52.821 "method": "env_dpdk_get_mem_stats", 00:06:52.821 "req_id": 1 00:06:52.821 } 00:06:52.821 Got JSON-RPC error response 00:06:52.821 response: 00:06:52.821 { 00:06:52.821 "code": -32601, 00:06:52.821 "message": "Method not found" 00:06:52.821 } 00:06:52.821 16:03:18 app_cmdline -- common/autotest_common.sh@655 -- # es=1 
00:06:52.821 16:03:18 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:52.821 16:03:18 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:52.821 16:03:18 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:52.821 16:03:18 app_cmdline -- app/cmdline.sh@1 -- # killprocess 61829 00:06:52.821 16:03:18 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 61829 ']' 00:06:52.821 16:03:18 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 61829 00:06:52.821 16:03:19 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:52.821 16:03:19 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:52.821 16:03:19 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61829 00:06:52.821 16:03:19 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:52.821 killing process with pid 61829 00:06:52.821 16:03:19 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:52.821 16:03:19 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61829' 00:06:52.821 16:03:19 app_cmdline -- common/autotest_common.sh@973 -- # kill 61829 00:06:52.821 16:03:19 app_cmdline -- common/autotest_common.sh@978 -- # wait 61829 00:06:56.106 00:06:56.106 real 0m4.879s 00:06:56.106 user 0m4.868s 00:06:56.106 sys 0m0.789s 00:06:56.106 16:03:21 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:56.106 ************************************ 00:06:56.106 END TEST app_cmdline 00:06:56.106 ************************************ 00:06:56.106 16:03:21 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:56.106 16:03:21 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:56.106 16:03:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:56.106 16:03:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:56.106 16:03:21 -- 
common/autotest_common.sh@10 -- # set +x 00:06:56.106 ************************************ 00:06:56.106 START TEST version 00:06:56.106 ************************************ 00:06:56.106 16:03:21 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:56.106 * Looking for test storage... 00:06:56.106 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:56.106 16:03:21 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:56.106 16:03:21 version -- common/autotest_common.sh@1711 -- # lcov --version 00:06:56.106 16:03:21 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:56.106 16:03:22 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:56.106 16:03:22 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:56.106 16:03:22 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:56.106 16:03:22 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:56.106 16:03:22 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:56.106 16:03:22 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:56.106 16:03:22 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:56.106 16:03:22 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:56.106 16:03:22 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:56.106 16:03:22 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:56.106 16:03:22 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:56.106 16:03:22 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:56.106 16:03:22 version -- scripts/common.sh@344 -- # case "$op" in 00:06:56.106 16:03:22 version -- scripts/common.sh@345 -- # : 1 00:06:56.106 16:03:22 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:56.106 16:03:22 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:56.106 16:03:22 version -- scripts/common.sh@365 -- # decimal 1 00:06:56.106 16:03:22 version -- scripts/common.sh@353 -- # local d=1 00:06:56.106 16:03:22 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:56.106 16:03:22 version -- scripts/common.sh@355 -- # echo 1 00:06:56.106 16:03:22 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:56.106 16:03:22 version -- scripts/common.sh@366 -- # decimal 2 00:06:56.106 16:03:22 version -- scripts/common.sh@353 -- # local d=2 00:06:56.106 16:03:22 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:56.106 16:03:22 version -- scripts/common.sh@355 -- # echo 2 00:06:56.106 16:03:22 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:56.106 16:03:22 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:56.106 16:03:22 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:56.106 16:03:22 version -- scripts/common.sh@368 -- # return 0 00:06:56.106 16:03:22 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:56.106 16:03:22 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:56.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.106 --rc genhtml_branch_coverage=1 00:06:56.106 --rc genhtml_function_coverage=1 00:06:56.106 --rc genhtml_legend=1 00:06:56.106 --rc geninfo_all_blocks=1 00:06:56.106 --rc geninfo_unexecuted_blocks=1 00:06:56.106 00:06:56.106 ' 00:06:56.106 16:03:22 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:56.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.106 --rc genhtml_branch_coverage=1 00:06:56.106 --rc genhtml_function_coverage=1 00:06:56.106 --rc genhtml_legend=1 00:06:56.106 --rc geninfo_all_blocks=1 00:06:56.106 --rc geninfo_unexecuted_blocks=1 00:06:56.106 00:06:56.106 ' 00:06:56.106 16:03:22 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:56.106 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.106 --rc genhtml_branch_coverage=1 00:06:56.106 --rc genhtml_function_coverage=1 00:06:56.106 --rc genhtml_legend=1 00:06:56.106 --rc geninfo_all_blocks=1 00:06:56.106 --rc geninfo_unexecuted_blocks=1 00:06:56.106 00:06:56.106 ' 00:06:56.106 16:03:22 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:56.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.106 --rc genhtml_branch_coverage=1 00:06:56.106 --rc genhtml_function_coverage=1 00:06:56.106 --rc genhtml_legend=1 00:06:56.106 --rc geninfo_all_blocks=1 00:06:56.106 --rc geninfo_unexecuted_blocks=1 00:06:56.106 00:06:56.106 ' 00:06:56.106 16:03:22 version -- app/version.sh@17 -- # get_header_version major 00:06:56.106 16:03:22 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:56.106 16:03:22 version -- app/version.sh@14 -- # cut -f2 00:06:56.106 16:03:22 version -- app/version.sh@14 -- # tr -d '"' 00:06:56.106 16:03:22 version -- app/version.sh@17 -- # major=25 00:06:56.106 16:03:22 version -- app/version.sh@18 -- # get_header_version minor 00:06:56.106 16:03:22 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:56.106 16:03:22 version -- app/version.sh@14 -- # cut -f2 00:06:56.106 16:03:22 version -- app/version.sh@14 -- # tr -d '"' 00:06:56.106 16:03:22 version -- app/version.sh@18 -- # minor=1 00:06:56.106 16:03:22 version -- app/version.sh@19 -- # get_header_version patch 00:06:56.106 16:03:22 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:56.106 16:03:22 version -- app/version.sh@14 -- # cut -f2 00:06:56.106 16:03:22 version -- app/version.sh@14 -- # tr -d '"' 00:06:56.106 16:03:22 version -- app/version.sh@19 -- # patch=0 00:06:56.106 
16:03:22 version -- app/version.sh@20 -- # get_header_version suffix 00:06:56.106 16:03:22 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:56.106 16:03:22 version -- app/version.sh@14 -- # cut -f2 00:06:56.106 16:03:22 version -- app/version.sh@14 -- # tr -d '"' 00:06:56.106 16:03:22 version -- app/version.sh@20 -- # suffix=-pre 00:06:56.106 16:03:22 version -- app/version.sh@22 -- # version=25.1 00:06:56.106 16:03:22 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:56.106 16:03:22 version -- app/version.sh@28 -- # version=25.1rc0 00:06:56.106 16:03:22 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:56.106 16:03:22 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:56.106 16:03:22 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:56.106 16:03:22 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:56.106 ************************************ 00:06:56.106 END TEST version 00:06:56.106 ************************************ 00:06:56.106 00:06:56.106 real 0m0.319s 00:06:56.106 user 0m0.192s 00:06:56.106 sys 0m0.185s 00:06:56.106 16:03:22 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:56.106 16:03:22 version -- common/autotest_common.sh@10 -- # set +x 00:06:56.106 16:03:22 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:56.106 16:03:22 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:06:56.106 16:03:22 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:06:56.106 16:03:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:56.106 16:03:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:56.106 16:03:22 -- 
common/autotest_common.sh@10 -- # set +x 00:06:56.106 ************************************ 00:06:56.106 START TEST bdev_raid 00:06:56.106 ************************************ 00:06:56.106 16:03:22 bdev_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:06:56.106 * Looking for test storage... 00:06:56.106 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:56.106 16:03:22 bdev_raid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:56.106 16:03:22 bdev_raid -- common/autotest_common.sh@1711 -- # lcov --version 00:06:56.106 16:03:22 bdev_raid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:56.106 16:03:22 bdev_raid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:56.106 16:03:22 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:56.106 16:03:22 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:56.106 16:03:22 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:56.106 16:03:22 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:06:56.106 16:03:22 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:06:56.106 16:03:22 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:06:56.106 16:03:22 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:06:56.106 16:03:22 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:06:56.106 16:03:22 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:06:56.106 16:03:22 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:06:56.106 16:03:22 bdev_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:56.106 16:03:22 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:06:56.106 16:03:22 bdev_raid -- scripts/common.sh@345 -- # : 1 00:06:56.106 16:03:22 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:56.106 16:03:22 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:56.106 16:03:22 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:06:56.106 16:03:22 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:06:56.106 16:03:22 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:56.106 16:03:22 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:06:56.107 16:03:22 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:06:56.107 16:03:22 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:06:56.107 16:03:22 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:06:56.107 16:03:22 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:56.107 16:03:22 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:06:56.107 16:03:22 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:06:56.107 16:03:22 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:56.107 16:03:22 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:56.107 16:03:22 bdev_raid -- scripts/common.sh@368 -- # return 0 00:06:56.107 16:03:22 bdev_raid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:56.107 16:03:22 bdev_raid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:56.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.107 --rc genhtml_branch_coverage=1 00:06:56.107 --rc genhtml_function_coverage=1 00:06:56.107 --rc genhtml_legend=1 00:06:56.107 --rc geninfo_all_blocks=1 00:06:56.107 --rc geninfo_unexecuted_blocks=1 00:06:56.107 00:06:56.107 ' 00:06:56.107 16:03:22 bdev_raid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:56.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.107 --rc genhtml_branch_coverage=1 00:06:56.107 --rc genhtml_function_coverage=1 00:06:56.107 --rc genhtml_legend=1 00:06:56.107 --rc geninfo_all_blocks=1 00:06:56.107 --rc geninfo_unexecuted_blocks=1 00:06:56.107 00:06:56.107 ' 00:06:56.107 16:03:22 bdev_raid -- common/autotest_common.sh@1725 -- 
# export 'LCOV=lcov 00:06:56.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.107 --rc genhtml_branch_coverage=1 00:06:56.107 --rc genhtml_function_coverage=1 00:06:56.107 --rc genhtml_legend=1 00:06:56.107 --rc geninfo_all_blocks=1 00:06:56.107 --rc geninfo_unexecuted_blocks=1 00:06:56.107 00:06:56.107 ' 00:06:56.107 16:03:22 bdev_raid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:56.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.107 --rc genhtml_branch_coverage=1 00:06:56.107 --rc genhtml_function_coverage=1 00:06:56.107 --rc genhtml_legend=1 00:06:56.107 --rc geninfo_all_blocks=1 00:06:56.107 --rc geninfo_unexecuted_blocks=1 00:06:56.107 00:06:56.107 ' 00:06:56.107 16:03:22 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:56.107 16:03:22 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:06:56.107 16:03:22 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:06:56.107 16:03:22 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:06:56.107 16:03:22 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:06:56.107 16:03:22 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:06:56.107 16:03:22 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:06:56.107 16:03:22 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:56.107 16:03:22 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:56.107 16:03:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:56.366 ************************************ 00:06:56.366 START TEST raid1_resize_data_offset_test 00:06:56.366 ************************************ 00:06:56.366 16:03:22 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1129 -- # raid_resize_data_offset_test 00:06:56.366 16:03:22 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # 
raid_pid=62022 00:06:56.366 16:03:22 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 62022' 00:06:56.366 16:03:22 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:56.366 Process raid pid: 62022 00:06:56.366 16:03:22 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 62022 00:06:56.366 16:03:22 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # '[' -z 62022 ']' 00:06:56.366 16:03:22 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:56.366 16:03:22 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:56.366 16:03:22 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:56.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:56.366 16:03:22 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:56.366 16:03:22 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.366 [2024-12-12 16:03:22.549060] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:06:56.366 [2024-12-12 16:03:22.549259] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:56.625 [2024-12-12 16:03:22.730140] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.625 [2024-12-12 16:03:22.870719] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.885 [2024-12-12 16:03:23.114178] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:56.885 [2024-12-12 16:03:23.114329] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:57.144 16:03:23 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:57.144 16:03:23 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@868 -- # return 0 00:06:57.144 16:03:23 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:06:57.144 16:03:23 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.144 16:03:23 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.144 malloc0 00:06:57.144 16:03:23 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.144 16:03:23 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:06:57.144 16:03:23 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.144 16:03:23 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.403 malloc1 00:06:57.403 16:03:23 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.403 16:03:23 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:06:57.403 16:03:23 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.403 16:03:23 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.403 null0 00:06:57.403 16:03:23 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.403 16:03:23 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:06:57.403 16:03:23 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.403 16:03:23 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.403 [2024-12-12 16:03:23.584675] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:06:57.403 [2024-12-12 16:03:23.586767] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:06:57.403 [2024-12-12 16:03:23.586881] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:06:57.404 [2024-12-12 16:03:23.587058] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:57.404 [2024-12-12 16:03:23.587075] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:06:57.404 [2024-12-12 16:03:23.587342] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:06:57.404 [2024-12-12 16:03:23.587535] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:57.404 [2024-12-12 16:03:23.587549] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:06:57.404 [2024-12-12 16:03:23.587702] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:06:57.404 16:03:23 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.404 16:03:23 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:57.404 16:03:23 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.404 16:03:23 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:06:57.404 16:03:23 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.404 16:03:23 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.404 16:03:23 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 )) 00:06:57.404 16:03:23 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0 00:06:57.404 16:03:23 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.404 16:03:23 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.404 [2024-12-12 16:03:23.644563] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: null0 00:06:57.404 16:03:23 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.404 16:03:23 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30 00:06:57.404 16:03:23 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.404 16:03:23 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.971 malloc2 00:06:57.971 16:03:24 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.971 16:03:24 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev 
Raid malloc2 00:06:57.971 16:03:24 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.971 16:03:24 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.971 [2024-12-12 16:03:24.285779] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:06:57.971 [2024-12-12 16:03:24.306056] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:57.971 16:03:24 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.971 [2024-12-12 16:03:24.308345] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid 00:06:57.971 16:03:24 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:57.971 16:03:24 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.971 16:03:24 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:06:57.971 16:03:24 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.230 16:03:24 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.230 16:03:24 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 )) 00:06:58.230 16:03:24 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 62022 00:06:58.230 16:03:24 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # '[' -z 62022 ']' 00:06:58.230 16:03:24 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # kill -0 62022 00:06:58.230 16:03:24 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # uname 00:06:58.230 16:03:24 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux 
']' 00:06:58.230 16:03:24 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62022 00:06:58.230 killing process with pid 62022 00:06:58.230 16:03:24 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:58.230 16:03:24 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:58.230 16:03:24 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62022' 00:06:58.230 16:03:24 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@973 -- # kill 62022 00:06:58.230 16:03:24 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@978 -- # wait 62022 00:06:58.230 [2024-12-12 16:03:24.401894] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:58.230 [2024-12-12 16:03:24.403167] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled 00:06:58.230 [2024-12-12 16:03:24.403230] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:58.230 [2024-12-12 16:03:24.403249] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2 00:06:58.230 [2024-12-12 16:03:24.442460] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:58.230 [2024-12-12 16:03:24.442911] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:58.230 [2024-12-12 16:03:24.442934] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:00.129 [2024-12-12 16:03:26.452216] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:01.598 16:03:27 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@943 -- # return 0 00:07:01.598 00:07:01.598 real 0m5.255s 00:07:01.598 user 0m4.944s 00:07:01.598 sys 0m0.756s 00:07:01.598 16:03:27 
bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:01.598 ************************************ 00:07:01.598 END TEST raid1_resize_data_offset_test 00:07:01.598 ************************************ 00:07:01.598 16:03:27 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.598 16:03:27 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:07:01.598 16:03:27 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:01.598 16:03:27 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:01.598 16:03:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:01.598 ************************************ 00:07:01.598 START TEST raid0_resize_superblock_test 00:07:01.598 ************************************ 00:07:01.598 16:03:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 0 00:07:01.598 16:03:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0 00:07:01.598 16:03:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=62117 00:07:01.598 Process raid pid: 62117 00:07:01.598 16:03:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:01.598 16:03:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 62117' 00:07:01.598 16:03:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 62117 00:07:01.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:01.598 16:03:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 62117 ']' 00:07:01.598 16:03:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:01.598 16:03:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:01.598 16:03:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:01.598 16:03:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:01.598 16:03:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.598 [2024-12-12 16:03:27.872064] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:07:01.598 [2024-12-12 16:03:27.872283] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:01.861 [2024-12-12 16:03:28.051127] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.861 [2024-12-12 16:03:28.200622] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.119 [2024-12-12 16:03:28.446787] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:02.119 [2024-12-12 16:03:28.446946] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:02.377 16:03:28 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:02.377 16:03:28 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:02.377 16:03:28 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 
00:07:02.377 16:03:28 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.377 16:03:28 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.311 malloc0 00:07:03.311 16:03:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.311 16:03:29 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:03.311 16:03:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.311 16:03:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.311 [2024-12-12 16:03:29.393614] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:03.311 [2024-12-12 16:03:29.393719] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:03.311 [2024-12-12 16:03:29.393750] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:03.311 [2024-12-12 16:03:29.393773] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:03.311 [2024-12-12 16:03:29.396684] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:03.311 [2024-12-12 16:03:29.396791] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:07:03.311 pt0 00:07:03.311 16:03:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.311 16:03:29 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:07:03.311 16:03:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.311 16:03:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.311 bafb036c-dd1c-4a50-94f1-ba4ca899f63f 00:07:03.311 16:03:29 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.311 16:03:29 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:07:03.311 16:03:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.311 16:03:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.311 7f4e9173-3b9a-4eaf-9dc9-910ef11633ae 00:07:03.311 16:03:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.311 16:03:29 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:07:03.311 16:03:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.311 16:03:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.311 3b5bd8fb-a294-4c35-9b5e-ca99e14b6814 00:07:03.311 16:03:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.311 16:03:29 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:07:03.311 16:03:29 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:07:03.311 16:03:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.311 16:03:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.311 [2024-12-12 16:03:29.606133] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 7f4e9173-3b9a-4eaf-9dc9-910ef11633ae is claimed 00:07:03.311 [2024-12-12 16:03:29.606243] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 3b5bd8fb-a294-4c35-9b5e-ca99e14b6814 is claimed 00:07:03.311 [2024-12-12 16:03:29.606404] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:03.311 [2024-12-12 16:03:29.606423] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:07:03.311 [2024-12-12 16:03:29.606747] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:03.311 [2024-12-12 16:03:29.606992] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:03.311 [2024-12-12 16:03:29.607004] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:03.311 [2024-12-12 16:03:29.607170] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:03.311 16:03:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.311 16:03:29 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:03.311 16:03:29 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:07:03.311 16:03:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.311 16:03:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.311 16:03:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.570 16:03:29 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:07:03.570 16:03:29 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:03.570 16:03:29 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:07:03.570 16:03:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.570 16:03:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.570 16:03:29 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.570 16:03:29 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:07:03.570 16:03:29 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:03.570 16:03:29 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks' 00:07:03.570 16:03:29 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:03.570 16:03:29 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:03.570 16:03:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.570 16:03:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.570 [2024-12-12 16:03:29.722270] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:03.570 16:03:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.570 16:03:29 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:03.570 16:03:29 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:03.570 16:03:29 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 )) 00:07:03.570 16:03:29 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:07:03.570 16:03:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.570 16:03:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.570 [2024-12-12 16:03:29.754256] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:03.570 [2024-12-12 16:03:29.754290] bdev_raid.c:2330:raid_bdev_resize_base_bdev: 
*NOTICE*: base_bdev '7f4e9173-3b9a-4eaf-9dc9-910ef11633ae' was resized: old size 131072, new size 204800 00:07:03.570 16:03:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.570 16:03:29 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:07:03.570 16:03:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.570 16:03:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.570 [2024-12-12 16:03:29.766066] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:03.570 [2024-12-12 16:03:29.766096] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '3b5bd8fb-a294-4c35-9b5e-ca99e14b6814' was resized: old size 131072, new size 204800 00:07:03.570 [2024-12-12 16:03:29.766131] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216 00:07:03.570 16:03:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.570 16:03:29 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:03.570 16:03:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.570 16:03:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.571 16:03:29 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:07:03.571 16:03:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.571 16:03:29 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:07:03.571 16:03:29 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:03.571 16:03:29 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:07:03.571 16:03:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.571 16:03:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.571 16:03:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.571 16:03:29 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:07:03.571 16:03:29 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:03.571 16:03:29 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:03.571 16:03:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.571 16:03:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.571 16:03:29 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:03.571 16:03:29 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:07:03.571 [2024-12-12 16:03:29.866060] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:03.571 16:03:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.571 16:03:29 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:03.571 16:03:29 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:03.571 16:03:29 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 )) 00:07:03.571 16:03:29 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:07:03.571 16:03:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:07:03.571 16:03:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.571 [2024-12-12 16:03:29.913718] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:07:03.571 [2024-12-12 16:03:29.913841] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:07:03.571 [2024-12-12 16:03:29.913861] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:03.571 [2024-12-12 16:03:29.913879] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:07:03.571 [2024-12-12 16:03:29.914059] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:03.571 [2024-12-12 16:03:29.914110] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:03.571 [2024-12-12 16:03:29.914126] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:03.571 16:03:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.571 16:03:29 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:03.571 16:03:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.571 16:03:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.829 [2024-12-12 16:03:29.925528] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:03.829 [2024-12-12 16:03:29.925590] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:03.829 [2024-12-12 16:03:29.925615] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:07:03.829 [2024-12-12 16:03:29.925627] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:03.829 
[2024-12-12 16:03:29.928295] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:03.829 [2024-12-12 16:03:29.928396] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:07:03.829 [2024-12-12 16:03:29.930339] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 7f4e9173-3b9a-4eaf-9dc9-910ef11633ae 00:07:03.829 [2024-12-12 16:03:29.930427] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 7f4e9173-3b9a-4eaf-9dc9-910ef11633ae is claimed 00:07:03.829 [2024-12-12 16:03:29.930542] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 3b5bd8fb-a294-4c35-9b5e-ca99e14b6814 00:07:03.829 [2024-12-12 16:03:29.930563] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 3b5bd8fb-a294-4c35-9b5e-ca99e14b6814 is claimed 00:07:03.829 [2024-12-12 16:03:29.930763] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 3b5bd8fb-a294-4c35-9b5e-ca99e14b6814 (2) smaller than existing raid bdev Raid (3) 00:07:03.829 [2024-12-12 16:03:29.930790] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 7f4e9173-3b9a-4eaf-9dc9-910ef11633ae: File exists 00:07:03.829 [2024-12-12 16:03:29.930830] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:07:03.829 [2024-12-12 16:03:29.930843] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:07:03.829 pt0 00:07:03.829 [2024-12-12 16:03:29.931158] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:07:03.829 [2024-12-12 16:03:29.931334] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:07:03.829 [2024-12-12 16:03:29.931344] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:07:03.829 [2024-12-12 16:03:29.931507] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:07:03.829 16:03:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.829 16:03:29 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:07:03.829 16:03:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.829 16:03:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.829 16:03:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.829 16:03:29 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:03.829 16:03:29 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:03.829 16:03:29 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:03.829 16:03:29 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks' 00:07:03.829 16:03:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.830 16:03:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.830 [2024-12-12 16:03:29.954726] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:03.830 16:03:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.830 16:03:29 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:03.830 16:03:29 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:03.830 16:03:29 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 )) 00:07:03.830 16:03:29 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 62117 00:07:03.830 16:03:29 bdev_raid.raid0_resize_superblock_test 
-- common/autotest_common.sh@954 -- # '[' -z 62117 ']' 00:07:03.830 16:03:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 62117 00:07:03.830 16:03:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # uname 00:07:03.830 16:03:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:03.830 16:03:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62117 00:07:03.830 killing process with pid 62117 00:07:03.830 16:03:30 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:03.830 16:03:30 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:03.830 16:03:30 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62117' 00:07:03.830 16:03:30 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 62117 00:07:03.830 [2024-12-12 16:03:30.034080] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:03.830 [2024-12-12 16:03:30.034196] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:03.830 16:03:30 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 62117 00:07:03.830 [2024-12-12 16:03:30.034257] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:03.830 [2024-12-12 16:03:30.034267] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:07:05.733 [2024-12-12 16:03:31.744327] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:06.669 16:03:32 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:07:06.669 00:07:06.669 real 0m5.211s 00:07:06.669 user 0m5.211s 00:07:06.669 sys 0m0.787s 
00:07:06.669 ************************************ 00:07:06.669 END TEST raid0_resize_superblock_test 00:07:06.669 ************************************ 00:07:06.669 16:03:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:06.669 16:03:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.928 16:03:33 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:07:06.928 16:03:33 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:06.928 16:03:33 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:06.928 16:03:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:06.928 ************************************ 00:07:06.928 START TEST raid1_resize_superblock_test 00:07:06.928 ************************************ 00:07:06.928 16:03:33 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 1 00:07:06.928 16:03:33 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1 00:07:06.928 16:03:33 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=62221 00:07:06.928 16:03:33 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:06.928 Process raid pid: 62221 00:07:06.928 16:03:33 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 62221' 00:07:06.928 16:03:33 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 62221 00:07:06.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:06.928 16:03:33 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 62221 ']' 00:07:06.928 16:03:33 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.928 16:03:33 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:06.928 16:03:33 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:06.928 16:03:33 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:06.928 16:03:33 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.928 [2024-12-12 16:03:33.153310] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:07:06.928 [2024-12-12 16:03:33.153551] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:07.188 [2024-12-12 16:03:33.333091] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.188 [2024-12-12 16:03:33.469169] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.447 [2024-12-12 16:03:33.702864] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:07.447 [2024-12-12 16:03:33.702912] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:07.706 16:03:33 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:07.706 16:03:33 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:07.706 16:03:33 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 
00:07:07.706 16:03:34 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.706 16:03:34 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.645 malloc0 00:07:08.645 16:03:34 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.645 16:03:34 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:08.645 16:03:34 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.645 16:03:34 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.645 [2024-12-12 16:03:34.630951] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:08.645 [2024-12-12 16:03:34.631086] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:08.645 [2024-12-12 16:03:34.631115] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:08.645 [2024-12-12 16:03:34.631130] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:08.645 [2024-12-12 16:03:34.633644] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:08.645 [2024-12-12 16:03:34.633685] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:07:08.645 pt0 00:07:08.645 16:03:34 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.645 16:03:34 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:07:08.645 16:03:34 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.645 16:03:34 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.645 50ffcece-d329-4305-9069-31efd3094c61 00:07:08.645 16:03:34 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.645 16:03:34 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:07:08.645 16:03:34 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.645 16:03:34 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.645 64d7273b-7ca6-4343-8e78-617cf0ffd10b 00:07:08.645 16:03:34 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.645 16:03:34 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:07:08.645 16:03:34 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.645 16:03:34 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.645 7ae8361f-ed47-4103-84a6-b2457de127c1 00:07:08.645 16:03:34 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.645 16:03:34 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:07:08.645 16:03:34 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:07:08.645 16:03:34 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.645 16:03:34 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.645 [2024-12-12 16:03:34.842524] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 64d7273b-7ca6-4343-8e78-617cf0ffd10b is claimed 00:07:08.645 [2024-12-12 16:03:34.842625] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 7ae8361f-ed47-4103-84a6-b2457de127c1 is claimed 00:07:08.645 [2024-12-12 16:03:34.842769] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:08.645 [2024-12-12 16:03:34.842786] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:07:08.645 [2024-12-12 16:03:34.843105] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:08.645 [2024-12-12 16:03:34.843317] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:08.645 [2024-12-12 16:03:34.843335] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:08.645 [2024-12-12 16:03:34.843498] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:08.645 16:03:34 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.645 16:03:34 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:08.645 16:03:34 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:07:08.645 16:03:34 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.645 16:03:34 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.645 16:03:34 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.645 16:03:34 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:07:08.645 16:03:34 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:08.645 16:03:34 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:07:08.645 16:03:34 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.645 16:03:34 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.645 16:03:34 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.645 16:03:34 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:07:08.645 16:03:34 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:08.645 16:03:34 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:08.645 16:03:34 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.645 16:03:34 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.645 16:03:34 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:08.645 16:03:34 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:07:08.645 [2024-12-12 16:03:34.946522] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:08.645 16:03:34 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.645 16:03:34 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:08.645 16:03:34 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:08.646 16:03:34 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:07:08.646 16:03:34 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:07:08.646 16:03:34 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.646 16:03:34 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.646 [2024-12-12 16:03:34.986454] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:08.646 [2024-12-12 16:03:34.986520] bdev_raid.c:2330:raid_bdev_resize_base_bdev: 
*NOTICE*: base_bdev '64d7273b-7ca6-4343-8e78-617cf0ffd10b' was resized: old size 131072, new size 204800 00:07:08.646 16:03:34 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.646 16:03:34 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:07:08.646 16:03:34 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.646 16:03:34 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.646 [2024-12-12 16:03:34.994330] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:08.646 [2024-12-12 16:03:34.994352] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '7ae8361f-ed47-4103-84a6-b2457de127c1' was resized: old size 131072, new size 204800 00:07:08.646 [2024-12-12 16:03:34.994379] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:07:08.908 16:03:34 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.908 16:03:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:07:08.908 16:03:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:08.908 16:03:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.908 16:03:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.908 16:03:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.908 16:03:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:07:08.908 16:03:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:08.908 16:03:35 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.908 16:03:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.908 16:03:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:07:08.908 16:03:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.908 16:03:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:07:08.908 16:03:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:08.908 16:03:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:08.908 16:03:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:08.908 16:03:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:07:08.908 16:03:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.908 16:03:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.908 [2024-12-12 16:03:35.082260] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:08.908 16:03:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.908 16:03:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:08.908 16:03:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:08.908 16:03:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:07:08.908 16:03:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:07:08.908 16:03:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:07:08.908 16:03:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.908 [2024-12-12 16:03:35.130038] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:07:08.908 [2024-12-12 16:03:35.130150] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:07:08.908 [2024-12-12 16:03:35.130194] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:07:08.908 [2024-12-12 16:03:35.130386] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:08.908 [2024-12-12 16:03:35.130620] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:08.908 [2024-12-12 16:03:35.130721] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:08.908 [2024-12-12 16:03:35.130775] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:08.908 16:03:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.908 16:03:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:08.908 16:03:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.908 16:03:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.908 [2024-12-12 16:03:35.137950] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:08.908 [2024-12-12 16:03:35.138033] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:08.908 [2024-12-12 16:03:35.138072] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:07:08.908 [2024-12-12 16:03:35.138102] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:08.908 
[2024-12-12 16:03:35.140565] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:08.908 [2024-12-12 16:03:35.140634] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:07:08.908 [2024-12-12 16:03:35.142354] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 64d7273b-7ca6-4343-8e78-617cf0ffd10b 00:07:08.908 [2024-12-12 16:03:35.142482] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 64d7273b-7ca6-4343-8e78-617cf0ffd10b is claimed 00:07:08.908 [2024-12-12 16:03:35.142646] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 7ae8361f-ed47-4103-84a6-b2457de127c1 00:07:08.908 [2024-12-12 16:03:35.142707] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 7ae8361f-ed47-4103-84a6-b2457de127c1 is claimed 00:07:08.908 [2024-12-12 16:03:35.142926] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 7ae8361f-ed47-4103-84a6-b2457de127c1 (2) smaller than existing raid bdev Raid (3) 00:07:08.908 [2024-12-12 16:03:35.142996] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 64d7273b-7ca6-4343-8e78-617cf0ffd10b: File exists 00:07:08.909 [2024-12-12 16:03:35.143067] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:07:08.909 [2024-12-12 16:03:35.143103] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:07:08.909 pt0 00:07:08.909 [2024-12-12 16:03:35.143393] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:07:08.909 [2024-12-12 16:03:35.143595] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:07:08.909 [2024-12-12 16:03:35.143634] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:07:08.909 [2024-12-12 16:03:35.143814] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:07:08.909 16:03:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.909 16:03:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:07:08.909 16:03:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.909 16:03:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.909 16:03:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.909 16:03:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:08.909 16:03:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:07:08.909 16:03:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:08.909 16:03:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:08.909 16:03:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.909 16:03:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.909 [2024-12-12 16:03:35.166596] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:08.909 16:03:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.909 16:03:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:08.909 16:03:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:08.909 16:03:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:07:08.909 16:03:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 62221 00:07:08.909 16:03:35 bdev_raid.raid1_resize_superblock_test 
-- common/autotest_common.sh@954 -- # '[' -z 62221 ']' 00:07:08.909 16:03:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 62221 00:07:08.909 16:03:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # uname 00:07:08.909 16:03:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:08.909 16:03:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62221 00:07:08.909 killing process with pid 62221 00:07:08.909 16:03:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:08.909 16:03:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:08.909 16:03:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62221' 00:07:08.909 16:03:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 62221 00:07:08.909 [2024-12-12 16:03:35.240671] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:08.909 [2024-12-12 16:03:35.240727] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:08.909 [2024-12-12 16:03:35.240771] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:08.909 [2024-12-12 16:03:35.240780] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:07:08.909 16:03:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 62221 00:07:10.820 [2024-12-12 16:03:36.791899] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:11.758 16:03:38 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:07:11.758 00:07:11.758 real 0m4.947s 00:07:11.758 user 0m4.950s 00:07:11.758 sys 0m0.758s 
00:07:11.758 16:03:38 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:11.758 16:03:38 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.758 ************************************ 00:07:11.758 END TEST raid1_resize_superblock_test 00:07:11.758 ************************************ 00:07:11.758 16:03:38 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:07:11.758 16:03:38 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:07:11.758 16:03:38 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:07:11.758 16:03:38 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:07:11.758 16:03:38 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:07:11.758 16:03:38 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:07:11.758 16:03:38 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:11.758 16:03:38 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:11.758 16:03:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:11.758 ************************************ 00:07:11.758 START TEST raid_function_test_raid0 00:07:11.758 ************************************ 00:07:11.758 16:03:38 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1129 -- # raid_function_test raid0 00:07:11.758 16:03:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:07:11.758 16:03:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:07:11.758 16:03:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:07:11.758 Process raid pid: 62324 00:07:11.758 16:03:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=62324 00:07:11.758 16:03:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L 
bdev_raid 00:07:11.758 16:03:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 62324' 00:07:11.758 16:03:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 62324 00:07:11.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:11.758 16:03:38 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # '[' -z 62324 ']' 00:07:11.758 16:03:38 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:11.758 16:03:38 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:11.758 16:03:38 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:11.758 16:03:38 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:11.758 16:03:38 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:12.018 [2024-12-12 16:03:38.180044] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:07:12.018 [2024-12-12 16:03:38.180245] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:12.018 [2024-12-12 16:03:38.356666] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.277 [2024-12-12 16:03:38.490477] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.537 [2024-12-12 16:03:38.733158] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:12.537 [2024-12-12 16:03:38.733313] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:12.797 16:03:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:12.797 16:03:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # return 0 00:07:12.797 16:03:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:07:12.797 16:03:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.797 16:03:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:12.797 Base_1 00:07:12.797 16:03:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.797 16:03:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:07:12.797 16:03:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.797 16:03:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:12.797 Base_2 00:07:12.797 16:03:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.797 16:03:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 
64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:07:12.797 16:03:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.797 16:03:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:12.797 [2024-12-12 16:03:39.115365] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:12.797 [2024-12-12 16:03:39.117625] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:12.797 [2024-12-12 16:03:39.117700] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:12.797 [2024-12-12 16:03:39.117712] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:12.797 [2024-12-12 16:03:39.118011] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:12.797 [2024-12-12 16:03:39.118177] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:12.797 [2024-12-12 16:03:39.118187] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:07:12.797 [2024-12-12 16:03:39.118344] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:12.797 16:03:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.797 16:03:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:07:12.797 16:03:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:07:12.797 16:03:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.797 16:03:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:12.797 16:03:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.064 16:03:39 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:07:13.064 16:03:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:07:13.064 16:03:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:07:13.064 16:03:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:07:13.064 16:03:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:07:13.064 16:03:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:13.064 16:03:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:07:13.064 16:03:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:13.064 16:03:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:07:13.064 16:03:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:13.064 16:03:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:13.064 16:03:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:07:13.064 [2024-12-12 16:03:39.343078] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:07:13.064 /dev/nbd0 00:07:13.064 16:03:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:13.064 16:03:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:13.064 16:03:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:13.064 16:03:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # local i 00:07:13.064 16:03:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:13.064 
16:03:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:13.064 16:03:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:13.064 16:03:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@877 -- # break 00:07:13.064 16:03:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:13.064 16:03:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:13.064 16:03:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:13.064 1+0 records in 00:07:13.064 1+0 records out 00:07:13.064 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000239019 s, 17.1 MB/s 00:07:13.064 16:03:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:13.341 16:03:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # size=4096 00:07:13.341 16:03:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:13.341 16:03:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:13.341 16:03:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@893 -- # return 0 00:07:13.341 16:03:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:13.341 16:03:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:13.341 16:03:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:07:13.341 16:03:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:13.341 16:03:39 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:13.341 16:03:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:13.341 { 00:07:13.341 "nbd_device": "/dev/nbd0", 00:07:13.341 "bdev_name": "raid" 00:07:13.341 } 00:07:13.341 ]' 00:07:13.341 16:03:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:13.341 { 00:07:13.341 "nbd_device": "/dev/nbd0", 00:07:13.341 "bdev_name": "raid" 00:07:13.341 } 00:07:13.341 ]' 00:07:13.341 16:03:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:13.341 16:03:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:07:13.341 16:03:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:13.341 16:03:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:07:13.341 16:03:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:07:13.341 16:03:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:07:13.341 16:03:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:07:13.341 16:03:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:07:13.341 16:03:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:07:13.341 16:03:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:07:13.341 16:03:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:07:13.341 16:03:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:07:13.341 16:03:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:07:13.341 16:03:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC 
/dev/nbd0 00:07:13.341 16:03:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:07:13.341 16:03:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:07:13.341 16:03:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:07:13.341 16:03:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:07:13.341 16:03:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:07:13.341 16:03:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:07:13.341 16:03:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:07:13.341 16:03:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:07:13.341 16:03:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:07:13.341 16:03:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:07:13.341 16:03:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:07:13.612 4096+0 records in 00:07:13.612 4096+0 records out 00:07:13.612 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.02425 s, 86.5 MB/s 00:07:13.612 16:03:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:07:13.612 4096+0 records in 00:07:13.612 4096+0 records out 00:07:13.612 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.242022 s, 8.7 MB/s 00:07:13.612 16:03:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:07:13.872 16:03:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:13.872 16:03:39 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:07:13.872 16:03:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:13.872 16:03:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:07:13.872 16:03:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:07:13.872 16:03:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:07:13.872 128+0 records in 00:07:13.872 128+0 records out 00:07:13.872 65536 bytes (66 kB, 64 KiB) copied, 0.000460272 s, 142 MB/s 00:07:13.872 16:03:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:07:13.872 16:03:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:13.872 16:03:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:13.872 16:03:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:13.872 16:03:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:13.872 16:03:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:07:13.872 16:03:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:07:13.872 16:03:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:07:13.872 2035+0 records in 00:07:13.872 2035+0 records out 00:07:13.872 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00570902 s, 183 MB/s 00:07:13.872 16:03:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:07:13.872 16:03:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:13.872 16:03:40 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:13.872 16:03:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:13.872 16:03:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:13.872 16:03:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:07:13.872 16:03:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:07:13.872 16:03:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:07:13.872 456+0 records in 00:07:13.872 456+0 records out 00:07:13.872 233472 bytes (233 kB, 228 KiB) copied, 0.00149373 s, 156 MB/s 00:07:13.872 16:03:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:07:13.872 16:03:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:13.872 16:03:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:13.872 16:03:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:13.872 16:03:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:13.872 16:03:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:07:13.872 16:03:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:07:13.872 16:03:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:07:13.872 16:03:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:13.872 16:03:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:13.872 16:03:40 
bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:07:13.872 16:03:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:13.872 16:03:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:07:14.132 16:03:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:14.132 [2024-12-12 16:03:40.266759] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:14.132 16:03:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:14.132 16:03:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:14.132 16:03:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:14.132 16:03:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:14.132 16:03:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:14.132 16:03:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:07:14.132 16:03:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:07:14.132 16:03:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:07:14.132 16:03:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:14.132 16:03:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:14.392 16:03:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:14.392 16:03:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:14.392 16:03:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:07:14.392 16:03:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:14.392 16:03:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:07:14.392 16:03:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:14.392 16:03:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:07:14.392 16:03:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:07:14.392 16:03:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:07:14.392 16:03:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:07:14.392 16:03:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:07:14.392 16:03:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 62324 00:07:14.392 16:03:40 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # '[' -z 62324 ']' 00:07:14.392 16:03:40 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # kill -0 62324 00:07:14.392 16:03:40 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # uname 00:07:14.392 16:03:40 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:14.392 16:03:40 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62324 00:07:14.392 killing process with pid 62324 00:07:14.392 16:03:40 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:14.392 16:03:40 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:14.392 16:03:40 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62324' 00:07:14.392 16:03:40 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@973 -- # kill 62324 
00:07:14.392 [2024-12-12 16:03:40.604643] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:14.392 [2024-12-12 16:03:40.604768] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:14.392 16:03:40 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@978 -- # wait 62324 00:07:14.392 [2024-12-12 16:03:40.604824] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:14.392 [2024-12-12 16:03:40.604841] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:07:14.652 [2024-12-12 16:03:40.831286] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:16.033 16:03:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:07:16.033 00:07:16.033 real 0m3.964s 00:07:16.033 user 0m4.444s 00:07:16.033 sys 0m1.011s 00:07:16.033 16:03:42 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:16.033 16:03:42 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:16.033 ************************************ 00:07:16.033 END TEST raid_function_test_raid0 00:07:16.033 ************************************ 00:07:16.033 16:03:42 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:07:16.033 16:03:42 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:16.033 16:03:42 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:16.033 16:03:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:16.033 ************************************ 00:07:16.033 START TEST raid_function_test_concat 00:07:16.033 ************************************ 00:07:16.033 16:03:42 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1129 -- # raid_function_test concat 00:07:16.033 16:03:42 bdev_raid.raid_function_test_concat -- 
bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:07:16.033 16:03:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:07:16.033 16:03:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:07:16.033 16:03:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=62452 00:07:16.033 16:03:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:16.033 16:03:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 62452' 00:07:16.033 Process raid pid: 62452 00:07:16.033 16:03:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 62452 00:07:16.033 16:03:42 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # '[' -z 62452 ']' 00:07:16.033 16:03:42 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:16.033 16:03:42 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:16.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:16.033 16:03:42 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:16.033 16:03:42 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:16.033 16:03:42 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:16.033 [2024-12-12 16:03:42.216000] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:07:16.033 [2024-12-12 16:03:42.216131] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:16.293 [2024-12-12 16:03:42.392571] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.293 [2024-12-12 16:03:42.532167] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.553 [2024-12-12 16:03:42.783307] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:16.553 [2024-12-12 16:03:42.783366] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:16.812 16:03:43 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:16.812 16:03:43 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # return 0 00:07:16.813 16:03:43 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:07:16.813 16:03:43 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.813 16:03:43 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:16.813 Base_1 00:07:16.813 16:03:43 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.813 16:03:43 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:07:16.813 16:03:43 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.813 16:03:43 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:16.813 Base_2 00:07:16.813 16:03:43 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.813 16:03:43 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:07:16.813 16:03:43 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.813 16:03:43 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:16.813 [2024-12-12 16:03:43.140288] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:16.813 [2024-12-12 16:03:43.142365] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:16.813 [2024-12-12 16:03:43.142440] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:16.813 [2024-12-12 16:03:43.142453] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:16.813 [2024-12-12 16:03:43.142710] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:16.813 [2024-12-12 16:03:43.142938] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:16.813 [2024-12-12 16:03:43.142959] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:07:16.813 [2024-12-12 16:03:43.143121] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:16.813 16:03:43 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.813 16:03:43 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:07:16.813 16:03:43 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:07:16.813 16:03:43 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.813 16:03:43 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:16.813 16:03:43 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.073 16:03:43 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:07:17.073 16:03:43 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:07:17.073 16:03:43 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:07:17.073 16:03:43 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:07:17.073 16:03:43 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:07:17.073 16:03:43 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:17.073 16:03:43 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:07:17.073 16:03:43 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:17.073 16:03:43 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:07:17.073 16:03:43 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:17.073 16:03:43 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:17.073 16:03:43 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:07:17.073 [2024-12-12 16:03:43.391961] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:07:17.073 /dev/nbd0 00:07:17.332 16:03:43 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:17.332 16:03:43 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:17.332 16:03:43 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:17.332 16:03:43 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # local i 00:07:17.332 16:03:43 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:17.332 16:03:43 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:17.332 16:03:43 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:17.332 16:03:43 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@877 -- # break 00:07:17.332 16:03:43 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:17.332 16:03:43 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:17.332 16:03:43 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:17.332 1+0 records in 00:07:17.332 1+0 records out 00:07:17.332 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000416441 s, 9.8 MB/s 00:07:17.332 16:03:43 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:17.332 16:03:43 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # size=4096 00:07:17.332 16:03:43 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:17.332 16:03:43 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:17.332 16:03:43 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@893 -- # return 0 00:07:17.332 16:03:43 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:17.333 16:03:43 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:17.333 16:03:43 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:07:17.333 16:03:43 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 
00:07:17.333 16:03:43 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:17.591 16:03:43 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:17.591 { 00:07:17.591 "nbd_device": "/dev/nbd0", 00:07:17.591 "bdev_name": "raid" 00:07:17.591 } 00:07:17.591 ]' 00:07:17.591 16:03:43 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:17.591 { 00:07:17.591 "nbd_device": "/dev/nbd0", 00:07:17.591 "bdev_name": "raid" 00:07:17.591 } 00:07:17.591 ]' 00:07:17.591 16:03:43 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:17.591 16:03:43 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:07:17.591 16:03:43 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:07:17.591 16:03:43 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:17.591 16:03:43 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:07:17.591 16:03:43 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:07:17.591 16:03:43 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:07:17.591 16:03:43 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:07:17.591 16:03:43 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:07:17.591 16:03:43 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:07:17.591 16:03:43 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:07:17.591 16:03:43 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:07:17.591 16:03:43 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:07:17.591 16:03:43 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:07:17.591 16:03:43 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:07:17.591 16:03:43 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:07:17.591 16:03:43 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:07:17.591 16:03:43 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:07:17.591 16:03:43 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:07:17.591 16:03:43 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:07:17.591 16:03:43 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:07:17.591 16:03:43 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:07:17.591 16:03:43 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:07:17.591 16:03:43 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:07:17.591 16:03:43 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:07:17.591 4096+0 records in 00:07:17.591 4096+0 records out 00:07:17.591 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0232939 s, 90.0 MB/s 00:07:17.591 16:03:43 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:07:17.851 4096+0 records in 00:07:17.851 4096+0 records out 00:07:17.851 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.217714 s, 9.6 MB/s 00:07:17.851 16:03:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:07:17.851 16:03:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest 
/dev/nbd0 00:07:17.851 16:03:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:07:17.851 16:03:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:17.851 16:03:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:07:17.851 16:03:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:07:17.851 16:03:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:07:17.851 128+0 records in 00:07:17.851 128+0 records out 00:07:17.851 65536 bytes (66 kB, 64 KiB) copied, 0.00108292 s, 60.5 MB/s 00:07:17.851 16:03:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:07:17.851 16:03:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:17.851 16:03:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:17.851 16:03:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:17.851 16:03:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:17.851 16:03:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:07:17.851 16:03:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:07:17.851 16:03:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:07:17.851 2035+0 records in 00:07:17.851 2035+0 records out 00:07:17.851 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0170612 s, 61.1 MB/s 00:07:17.851 16:03:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:07:17.851 16:03:44 bdev_raid.raid_function_test_concat -- 
bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:17.851 16:03:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:17.851 16:03:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:17.851 16:03:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:17.851 16:03:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:07:17.851 16:03:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:07:17.851 16:03:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:07:17.851 456+0 records in 00:07:17.851 456+0 records out 00:07:17.851 233472 bytes (233 kB, 228 KiB) copied, 0.00209577 s, 111 MB/s 00:07:17.852 16:03:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:07:17.852 16:03:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:17.852 16:03:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:17.852 16:03:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:17.852 16:03:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:17.852 16:03:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:07:17.852 16:03:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:07:17.852 16:03:44 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:07:17.852 16:03:44 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:17.852 16:03:44 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:17.852 16:03:44 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:07:17.852 16:03:44 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:17.852 16:03:44 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:07:18.111 16:03:44 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:18.111 [2024-12-12 16:03:44.347292] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:18.111 16:03:44 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:18.111 16:03:44 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:18.111 16:03:44 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:18.111 16:03:44 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:18.111 16:03:44 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:18.111 16:03:44 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:07:18.111 16:03:44 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:07:18.111 16:03:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:07:18.111 16:03:44 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:18.111 16:03:44 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:18.371 16:03:44 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:18.371 16:03:44 bdev_raid.raid_function_test_concat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:07:18.371 16:03:44 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:18.371 16:03:44 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:18.371 16:03:44 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:18.371 16:03:44 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:18.371 16:03:44 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:07:18.371 16:03:44 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:07:18.371 16:03:44 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:18.371 16:03:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:07:18.371 16:03:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:07:18.371 16:03:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 62452 00:07:18.371 16:03:44 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # '[' -z 62452 ']' 00:07:18.371 16:03:44 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # kill -0 62452 00:07:18.371 16:03:44 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # uname 00:07:18.371 16:03:44 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:18.371 16:03:44 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62452 00:07:18.371 16:03:44 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:18.371 16:03:44 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:18.371 killing process with pid 62452 00:07:18.371 16:03:44 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 62452' 00:07:18.371 16:03:44 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@973 -- # kill 62452 00:07:18.371 [2024-12-12 16:03:44.665073] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:18.371 [2024-12-12 16:03:44.665211] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:18.371 [2024-12-12 16:03:44.665276] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:18.371 [2024-12-12 16:03:44.665289] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:07:18.371 16:03:44 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@978 -- # wait 62452 00:07:18.630 [2024-12-12 16:03:44.894692] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:20.014 16:03:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:07:20.014 00:07:20.014 real 0m4.026s 00:07:20.014 user 0m4.516s 00:07:20.014 sys 0m1.093s 00:07:20.014 16:03:46 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:20.014 16:03:46 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:20.014 ************************************ 00:07:20.014 END TEST raid_function_test_concat 00:07:20.014 ************************************ 00:07:20.014 16:03:46 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:07:20.014 16:03:46 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:20.014 16:03:46 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:20.014 16:03:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:20.014 ************************************ 00:07:20.014 START TEST raid0_resize_test 00:07:20.014 ************************************ 00:07:20.014 16:03:46 bdev_raid.raid0_resize_test -- 
common/autotest_common.sh@1129 -- # raid_resize_test 0 00:07:20.014 16:03:46 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:07:20.014 16:03:46 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:07:20.014 16:03:46 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:07:20.014 16:03:46 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:07:20.014 16:03:46 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:07:20.014 16:03:46 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:07:20.014 16:03:46 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:07:20.014 16:03:46 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:07:20.014 16:03:46 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=62576 00:07:20.014 Process raid pid: 62576 00:07:20.014 16:03:46 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:20.014 16:03:46 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 62576' 00:07:20.014 16:03:46 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 62576 00:07:20.014 16:03:46 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # '[' -z 62576 ']' 00:07:20.014 16:03:46 bdev_raid.raid0_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:20.014 16:03:46 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:20.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:20.014 16:03:46 bdev_raid.raid0_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:20.014 16:03:46 bdev_raid.raid0_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:20.014 16:03:46 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.014 [2024-12-12 16:03:46.305966] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:07:20.014 [2024-12-12 16:03:46.306072] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:20.293 [2024-12-12 16:03:46.483939] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.293 [2024-12-12 16:03:46.623625] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.552 [2024-12-12 16:03:46.873286] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:20.552 [2024-12-12 16:03:46.873328] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:20.811 16:03:47 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:20.811 16:03:47 bdev_raid.raid0_resize_test -- common/autotest_common.sh@868 -- # return 0 00:07:20.811 16:03:47 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:07:20.811 16:03:47 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.811 16:03:47 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.811 Base_1 00:07:20.811 16:03:47 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.811 16:03:47 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:07:20.811 16:03:47 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.811 16:03:47 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set 
+x 00:07:20.811 Base_2 00:07:20.811 16:03:47 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.811 16:03:47 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:07:20.811 16:03:47 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:07:20.811 16:03:47 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.811 16:03:47 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.071 [2024-12-12 16:03:47.162434] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:21.071 [2024-12-12 16:03:47.164783] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:21.071 [2024-12-12 16:03:47.164872] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:21.071 [2024-12-12 16:03:47.164885] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:21.071 [2024-12-12 16:03:47.165255] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:21.071 [2024-12-12 16:03:47.165428] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:21.071 [2024-12-12 16:03:47.165443] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:21.071 [2024-12-12 16:03:47.165656] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:21.071 16:03:47 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.071 16:03:47 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:07:21.071 16:03:47 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.071 16:03:47 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 
00:07:21.071 [2024-12-12 16:03:47.174413] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:21.071 [2024-12-12 16:03:47.174455] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:07:21.071 true 00:07:21.071 16:03:47 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.071 16:03:47 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:21.071 16:03:47 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.071 16:03:47 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:07:21.071 16:03:47 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.071 [2024-12-12 16:03:47.190588] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:21.071 16:03:47 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.071 16:03:47 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:07:21.071 16:03:47 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:07:21.071 16:03:47 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:07:21.071 16:03:47 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:07:21.071 16:03:47 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:07:21.071 16:03:47 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:07:21.071 16:03:47 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.071 16:03:47 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.071 [2024-12-12 16:03:47.226318] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:21.071 [2024-12-12 16:03:47.226364] 
bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:07:21.071 [2024-12-12 16:03:47.226401] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:07:21.071 true 00:07:21.071 16:03:47 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.071 16:03:47 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:07:21.071 16:03:47 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:21.071 16:03:47 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.071 16:03:47 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.071 [2024-12-12 16:03:47.242408] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:21.071 16:03:47 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.071 16:03:47 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:07:21.071 16:03:47 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:07:21.071 16:03:47 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:07:21.071 16:03:47 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:07:21.071 16:03:47 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:07:21.071 16:03:47 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 62576 00:07:21.071 16:03:47 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # '[' -z 62576 ']' 00:07:21.071 16:03:47 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # kill -0 62576 00:07:21.071 16:03:47 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # uname 00:07:21.071 16:03:47 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- 
# '[' Linux = Linux ']' 00:07:21.071 16:03:47 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62576 00:07:21.071 16:03:47 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:21.071 16:03:47 bdev_raid.raid0_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:21.071 killing process with pid 62576 00:07:21.071 16:03:47 bdev_raid.raid0_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62576' 00:07:21.071 16:03:47 bdev_raid.raid0_resize_test -- common/autotest_common.sh@973 -- # kill 62576 00:07:21.071 [2024-12-12 16:03:47.310593] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:21.071 [2024-12-12 16:03:47.310717] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:21.071 [2024-12-12 16:03:47.310783] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:21.071 [2024-12-12 16:03:47.310795] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:21.071 16:03:47 bdev_raid.raid0_resize_test -- common/autotest_common.sh@978 -- # wait 62576 00:07:21.071 [2024-12-12 16:03:47.330038] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:22.452 16:03:48 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:07:22.452 00:07:22.452 real 0m2.382s 00:07:22.452 user 0m2.417s 00:07:22.452 sys 0m0.416s 00:07:22.452 16:03:48 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:22.452 16:03:48 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.452 ************************************ 00:07:22.452 END TEST raid0_resize_test 00:07:22.452 ************************************ 00:07:22.452 16:03:48 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:07:22.452 
16:03:48 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:22.452 16:03:48 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:22.452 16:03:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:22.452 ************************************ 00:07:22.452 START TEST raid1_resize_test 00:07:22.452 ************************************ 00:07:22.452 16:03:48 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 1 00:07:22.452 16:03:48 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:07:22.452 16:03:48 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:07:22.452 16:03:48 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:07:22.452 16:03:48 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:07:22.452 16:03:48 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:07:22.452 16:03:48 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:07:22.452 16:03:48 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:07:22.452 16:03:48 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:07:22.452 16:03:48 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=62638 00:07:22.452 16:03:48 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:22.452 Process raid pid: 62638 00:07:22.452 16:03:48 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 62638' 00:07:22.452 16:03:48 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 62638 00:07:22.452 16:03:48 bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # '[' -z 62638 ']' 00:07:22.452 16:03:48 bdev_raid.raid1_resize_test -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:07:22.452 16:03:48 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:22.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:22.452 16:03:48 bdev_raid.raid1_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:22.452 16:03:48 bdev_raid.raid1_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:22.452 16:03:48 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.452 [2024-12-12 16:03:48.761488] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:07:22.452 [2024-12-12 16:03:48.761610] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:22.711 [2024-12-12 16:03:48.935299] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.970 [2024-12-12 16:03:49.079489] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.230 [2024-12-12 16:03:49.328030] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:23.230 [2024-12-12 16:03:49.328076] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:23.490 16:03:49 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:23.490 16:03:49 bdev_raid.raid1_resize_test -- common/autotest_common.sh@868 -- # return 0 00:07:23.490 16:03:49 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:07:23.490 16:03:49 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.490 16:03:49 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.490 
Base_1 00:07:23.490 16:03:49 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.490 16:03:49 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:07:23.490 16:03:49 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.490 16:03:49 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.490 Base_2 00:07:23.490 16:03:49 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.490 16:03:49 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:07:23.490 16:03:49 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:07:23.490 16:03:49 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.490 16:03:49 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.490 [2024-12-12 16:03:49.619379] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:23.490 [2024-12-12 16:03:49.621513] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:23.490 [2024-12-12 16:03:49.621578] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:23.490 [2024-12-12 16:03:49.621590] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:07:23.490 [2024-12-12 16:03:49.621840] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:23.490 [2024-12-12 16:03:49.621986] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:23.490 [2024-12-12 16:03:49.622074] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:23.490 [2024-12-12 16:03:49.622243] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:07:23.490 16:03:49 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.490 16:03:49 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:07:23.490 16:03:49 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.490 16:03:49 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.490 [2024-12-12 16:03:49.631347] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:23.490 [2024-12-12 16:03:49.631381] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:07:23.490 true 00:07:23.490 16:03:49 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.490 16:03:49 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:07:23.490 16:03:49 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:23.490 16:03:49 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.490 16:03:49 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.490 [2024-12-12 16:03:49.647467] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:23.490 16:03:49 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.490 16:03:49 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:07:23.490 16:03:49 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:07:23.490 16:03:49 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:07:23.490 16:03:49 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:07:23.491 16:03:49 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:07:23.491 16:03:49 bdev_raid.raid1_resize_test -- 
bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:07:23.491 16:03:49 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.491 16:03:49 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.491 [2024-12-12 16:03:49.683228] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:23.491 [2024-12-12 16:03:49.683253] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:07:23.491 [2024-12-12 16:03:49.683274] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:07:23.491 true 00:07:23.491 16:03:49 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.491 16:03:49 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:23.491 16:03:49 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:07:23.491 16:03:49 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.491 16:03:49 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.491 [2024-12-12 16:03:49.699359] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:23.491 16:03:49 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.491 16:03:49 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:07:23.491 16:03:49 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:07:23.491 16:03:49 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:07:23.491 16:03:49 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:07:23.491 16:03:49 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:07:23.491 16:03:49 bdev_raid.raid1_resize_test -- 
bdev/bdev_raid.sh@387 -- # killprocess 62638 00:07:23.491 16:03:49 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # '[' -z 62638 ']' 00:07:23.491 16:03:49 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # kill -0 62638 00:07:23.491 16:03:49 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # uname 00:07:23.491 16:03:49 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:23.491 16:03:49 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62638 00:07:23.491 16:03:49 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:23.491 16:03:49 bdev_raid.raid1_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:23.491 killing process with pid 62638 00:07:23.491 16:03:49 bdev_raid.raid1_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62638' 00:07:23.491 16:03:49 bdev_raid.raid1_resize_test -- common/autotest_common.sh@973 -- # kill 62638 00:07:23.491 [2024-12-12 16:03:49.777209] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:23.491 [2024-12-12 16:03:49.777323] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:23.491 16:03:49 bdev_raid.raid1_resize_test -- common/autotest_common.sh@978 -- # wait 62638 00:07:23.491 [2024-12-12 16:03:49.777937] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:23.491 [2024-12-12 16:03:49.777968] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:23.491 [2024-12-12 16:03:49.797789] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:24.872 16:03:51 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:07:24.872 00:07:24.872 real 0m2.374s 00:07:24.872 user 0m2.398s 00:07:24.872 sys 0m0.443s 00:07:24.872 16:03:51 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:24.872 16:03:51 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.872 ************************************ 00:07:24.872 END TEST raid1_resize_test 00:07:24.872 ************************************ 00:07:24.872 16:03:51 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:07:24.872 16:03:51 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:24.872 16:03:51 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:07:24.872 16:03:51 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:24.872 16:03:51 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:24.872 16:03:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:24.872 ************************************ 00:07:24.872 START TEST raid_state_function_test 00:07:24.872 ************************************ 00:07:24.872 16:03:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 false 00:07:24.872 16:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:24.872 16:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:24.872 16:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:24.872 16:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:24.872 16:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:24.872 16:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:24.872 16:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:24.872 16:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:07:24.872 16:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:24.872 16:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:24.872 16:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:24.872 16:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:24.872 16:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:24.872 16:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:24.872 16:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:24.872 16:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:24.872 16:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:24.872 16:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:24.872 16:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:24.872 16:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:24.872 16:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:24.872 16:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:24.872 16:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:24.872 16:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=62699 00:07:24.873 Process raid pid: 62699 00:07:24.873 16:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62699' 00:07:24.873 16:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:24.873 16:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 62699 00:07:24.873 16:03:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 62699 ']' 00:07:24.873 16:03:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:24.873 16:03:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:24.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:24.873 16:03:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:24.873 16:03:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:24.873 16:03:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.873 [2024-12-12 16:03:51.218130] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:07:24.873 [2024-12-12 16:03:51.218270] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:25.132 [2024-12-12 16:03:51.391431] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.391 [2024-12-12 16:03:51.535779] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.650 [2024-12-12 16:03:51.785496] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:25.650 [2024-12-12 16:03:51.785549] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:25.910 16:03:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:25.910 16:03:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:07:25.910 16:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:25.910 16:03:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.910 16:03:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.910 [2024-12-12 16:03:52.039619] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:25.910 [2024-12-12 16:03:52.039684] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:25.910 [2024-12-12 16:03:52.039695] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:25.910 [2024-12-12 16:03:52.039705] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:25.910 16:03:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.910 16:03:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:25.910 16:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:25.910 16:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:25.910 16:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:25.910 16:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:25.910 16:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:25.910 16:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:25.910 16:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:25.910 16:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:25.910 16:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:25.910 16:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:25.910 16:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:25.910 16:03:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.910 16:03:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.910 16:03:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.910 16:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:25.910 "name": "Existed_Raid", 00:07:25.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:25.910 "strip_size_kb": 64, 00:07:25.910 "state": "configuring", 00:07:25.910 
"raid_level": "raid0", 00:07:25.910 "superblock": false, 00:07:25.910 "num_base_bdevs": 2, 00:07:25.910 "num_base_bdevs_discovered": 0, 00:07:25.910 "num_base_bdevs_operational": 2, 00:07:25.910 "base_bdevs_list": [ 00:07:25.910 { 00:07:25.910 "name": "BaseBdev1", 00:07:25.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:25.910 "is_configured": false, 00:07:25.910 "data_offset": 0, 00:07:25.910 "data_size": 0 00:07:25.910 }, 00:07:25.910 { 00:07:25.910 "name": "BaseBdev2", 00:07:25.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:25.910 "is_configured": false, 00:07:25.910 "data_offset": 0, 00:07:25.910 "data_size": 0 00:07:25.910 } 00:07:25.910 ] 00:07:25.910 }' 00:07:25.910 16:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:25.910 16:03:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.168 16:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:26.169 16:03:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.169 16:03:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.169 [2024-12-12 16:03:52.498858] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:26.169 [2024-12-12 16:03:52.498923] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:26.169 16:03:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.169 16:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:26.169 16:03:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.169 16:03:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:07:26.169 [2024-12-12 16:03:52.510803] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:26.169 [2024-12-12 16:03:52.510857] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:26.169 [2024-12-12 16:03:52.510867] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:26.169 [2024-12-12 16:03:52.510882] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:26.169 16:03:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.169 16:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:26.169 16:03:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.169 16:03:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.428 [2024-12-12 16:03:52.565638] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:26.428 BaseBdev1 00:07:26.428 16:03:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.428 16:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:26.428 16:03:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:26.428 16:03:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:26.428 16:03:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:26.428 16:03:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:26.428 16:03:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:26.428 16:03:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:07:26.428 16:03:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.428 16:03:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.428 16:03:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.428 16:03:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:26.428 16:03:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.428 16:03:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.428 [ 00:07:26.428 { 00:07:26.428 "name": "BaseBdev1", 00:07:26.428 "aliases": [ 00:07:26.428 "88febafa-3ab5-4181-b715-98f88488bf67" 00:07:26.428 ], 00:07:26.428 "product_name": "Malloc disk", 00:07:26.428 "block_size": 512, 00:07:26.428 "num_blocks": 65536, 00:07:26.428 "uuid": "88febafa-3ab5-4181-b715-98f88488bf67", 00:07:26.428 "assigned_rate_limits": { 00:07:26.428 "rw_ios_per_sec": 0, 00:07:26.428 "rw_mbytes_per_sec": 0, 00:07:26.429 "r_mbytes_per_sec": 0, 00:07:26.429 "w_mbytes_per_sec": 0 00:07:26.429 }, 00:07:26.429 "claimed": true, 00:07:26.429 "claim_type": "exclusive_write", 00:07:26.429 "zoned": false, 00:07:26.429 "supported_io_types": { 00:07:26.429 "read": true, 00:07:26.429 "write": true, 00:07:26.429 "unmap": true, 00:07:26.429 "flush": true, 00:07:26.429 "reset": true, 00:07:26.429 "nvme_admin": false, 00:07:26.429 "nvme_io": false, 00:07:26.429 "nvme_io_md": false, 00:07:26.429 "write_zeroes": true, 00:07:26.429 "zcopy": true, 00:07:26.429 "get_zone_info": false, 00:07:26.429 "zone_management": false, 00:07:26.429 "zone_append": false, 00:07:26.429 "compare": false, 00:07:26.429 "compare_and_write": false, 00:07:26.429 "abort": true, 00:07:26.429 "seek_hole": false, 00:07:26.429 "seek_data": false, 00:07:26.429 "copy": true, 00:07:26.429 "nvme_iov_md": 
false 00:07:26.429 }, 00:07:26.429 "memory_domains": [ 00:07:26.429 { 00:07:26.429 "dma_device_id": "system", 00:07:26.429 "dma_device_type": 1 00:07:26.429 }, 00:07:26.429 { 00:07:26.429 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:26.429 "dma_device_type": 2 00:07:26.429 } 00:07:26.429 ], 00:07:26.429 "driver_specific": {} 00:07:26.429 } 00:07:26.429 ] 00:07:26.429 16:03:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.429 16:03:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:26.429 16:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:26.429 16:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:26.429 16:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:26.429 16:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:26.429 16:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:26.429 16:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:26.429 16:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:26.429 16:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:26.429 16:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:26.429 16:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:26.429 16:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:26.429 16:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:26.429 
16:03:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.429 16:03:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.429 16:03:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.429 16:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:26.429 "name": "Existed_Raid", 00:07:26.429 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:26.429 "strip_size_kb": 64, 00:07:26.429 "state": "configuring", 00:07:26.429 "raid_level": "raid0", 00:07:26.429 "superblock": false, 00:07:26.429 "num_base_bdevs": 2, 00:07:26.429 "num_base_bdevs_discovered": 1, 00:07:26.429 "num_base_bdevs_operational": 2, 00:07:26.429 "base_bdevs_list": [ 00:07:26.429 { 00:07:26.429 "name": "BaseBdev1", 00:07:26.429 "uuid": "88febafa-3ab5-4181-b715-98f88488bf67", 00:07:26.429 "is_configured": true, 00:07:26.429 "data_offset": 0, 00:07:26.429 "data_size": 65536 00:07:26.429 }, 00:07:26.429 { 00:07:26.429 "name": "BaseBdev2", 00:07:26.429 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:26.429 "is_configured": false, 00:07:26.429 "data_offset": 0, 00:07:26.429 "data_size": 0 00:07:26.429 } 00:07:26.429 ] 00:07:26.429 }' 00:07:26.429 16:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:26.429 16:03:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.689 16:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:26.689 16:03:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.689 16:03:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.949 [2024-12-12 16:03:53.040933] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:26.949 [2024-12-12 16:03:53.040996] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:26.949 16:03:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.949 16:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:26.949 16:03:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.949 16:03:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.949 [2024-12-12 16:03:53.052935] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:26.949 [2024-12-12 16:03:53.055139] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:26.949 [2024-12-12 16:03:53.055184] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:26.949 16:03:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.949 16:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:26.949 16:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:26.949 16:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:26.949 16:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:26.949 16:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:26.949 16:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:26.949 16:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:26.949 16:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:07:26.949 16:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:26.949 16:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:26.949 16:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:26.949 16:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:26.949 16:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:26.949 16:03:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.949 16:03:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.949 16:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:26.949 16:03:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.949 16:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:26.949 "name": "Existed_Raid", 00:07:26.949 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:26.949 "strip_size_kb": 64, 00:07:26.949 "state": "configuring", 00:07:26.949 "raid_level": "raid0", 00:07:26.949 "superblock": false, 00:07:26.949 "num_base_bdevs": 2, 00:07:26.949 "num_base_bdevs_discovered": 1, 00:07:26.949 "num_base_bdevs_operational": 2, 00:07:26.949 "base_bdevs_list": [ 00:07:26.949 { 00:07:26.949 "name": "BaseBdev1", 00:07:26.949 "uuid": "88febafa-3ab5-4181-b715-98f88488bf67", 00:07:26.949 "is_configured": true, 00:07:26.949 "data_offset": 0, 00:07:26.949 "data_size": 65536 00:07:26.949 }, 00:07:26.949 { 00:07:26.949 "name": "BaseBdev2", 00:07:26.949 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:26.949 "is_configured": false, 00:07:26.949 "data_offset": 0, 00:07:26.949 "data_size": 0 00:07:26.949 } 00:07:26.949 
] 00:07:26.949 }' 00:07:26.949 16:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:26.949 16:03:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.209 16:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:27.209 16:03:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.209 16:03:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.469 [2024-12-12 16:03:53.568106] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:27.469 [2024-12-12 16:03:53.568168] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:27.469 [2024-12-12 16:03:53.568179] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:27.469 [2024-12-12 16:03:53.568487] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:27.469 [2024-12-12 16:03:53.568712] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:27.469 [2024-12-12 16:03:53.568733] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:27.469 [2024-12-12 16:03:53.569054] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:27.469 BaseBdev2 00:07:27.469 16:03:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.469 16:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:27.469 16:03:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:27.469 16:03:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:27.469 16:03:53 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:27.469 16:03:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:27.469 16:03:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:27.469 16:03:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:27.469 16:03:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.470 16:03:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.470 16:03:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.470 16:03:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:27.470 16:03:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.470 16:03:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.470 [ 00:07:27.470 { 00:07:27.470 "name": "BaseBdev2", 00:07:27.470 "aliases": [ 00:07:27.470 "0d57434a-c51c-4252-9c58-b8fb5b511ae4" 00:07:27.470 ], 00:07:27.470 "product_name": "Malloc disk", 00:07:27.470 "block_size": 512, 00:07:27.470 "num_blocks": 65536, 00:07:27.470 "uuid": "0d57434a-c51c-4252-9c58-b8fb5b511ae4", 00:07:27.470 "assigned_rate_limits": { 00:07:27.470 "rw_ios_per_sec": 0, 00:07:27.470 "rw_mbytes_per_sec": 0, 00:07:27.470 "r_mbytes_per_sec": 0, 00:07:27.470 "w_mbytes_per_sec": 0 00:07:27.470 }, 00:07:27.470 "claimed": true, 00:07:27.470 "claim_type": "exclusive_write", 00:07:27.470 "zoned": false, 00:07:27.470 "supported_io_types": { 00:07:27.470 "read": true, 00:07:27.470 "write": true, 00:07:27.470 "unmap": true, 00:07:27.470 "flush": true, 00:07:27.470 "reset": true, 00:07:27.470 "nvme_admin": false, 00:07:27.470 "nvme_io": false, 00:07:27.470 "nvme_io_md": 
false, 00:07:27.470 "write_zeroes": true, 00:07:27.470 "zcopy": true, 00:07:27.470 "get_zone_info": false, 00:07:27.470 "zone_management": false, 00:07:27.470 "zone_append": false, 00:07:27.470 "compare": false, 00:07:27.470 "compare_and_write": false, 00:07:27.470 "abort": true, 00:07:27.470 "seek_hole": false, 00:07:27.470 "seek_data": false, 00:07:27.470 "copy": true, 00:07:27.470 "nvme_iov_md": false 00:07:27.470 }, 00:07:27.470 "memory_domains": [ 00:07:27.470 { 00:07:27.470 "dma_device_id": "system", 00:07:27.470 "dma_device_type": 1 00:07:27.470 }, 00:07:27.470 { 00:07:27.470 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:27.470 "dma_device_type": 2 00:07:27.470 } 00:07:27.470 ], 00:07:27.470 "driver_specific": {} 00:07:27.470 } 00:07:27.470 ] 00:07:27.470 16:03:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.470 16:03:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:27.470 16:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:27.470 16:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:27.470 16:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:27.470 16:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:27.470 16:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:27.470 16:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:27.470 16:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:27.470 16:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:27.470 16:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:07:27.470 16:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:27.470 16:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:27.470 16:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:27.470 16:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:27.470 16:03:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.470 16:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:27.470 16:03:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.470 16:03:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.470 16:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:27.470 "name": "Existed_Raid", 00:07:27.470 "uuid": "e07fd8f7-200f-4c53-9bff-153e65baf5c6", 00:07:27.470 "strip_size_kb": 64, 00:07:27.470 "state": "online", 00:07:27.470 "raid_level": "raid0", 00:07:27.470 "superblock": false, 00:07:27.470 "num_base_bdevs": 2, 00:07:27.470 "num_base_bdevs_discovered": 2, 00:07:27.470 "num_base_bdevs_operational": 2, 00:07:27.470 "base_bdevs_list": [ 00:07:27.470 { 00:07:27.470 "name": "BaseBdev1", 00:07:27.470 "uuid": "88febafa-3ab5-4181-b715-98f88488bf67", 00:07:27.470 "is_configured": true, 00:07:27.470 "data_offset": 0, 00:07:27.470 "data_size": 65536 00:07:27.470 }, 00:07:27.470 { 00:07:27.470 "name": "BaseBdev2", 00:07:27.470 "uuid": "0d57434a-c51c-4252-9c58-b8fb5b511ae4", 00:07:27.470 "is_configured": true, 00:07:27.470 "data_offset": 0, 00:07:27.470 "data_size": 65536 00:07:27.470 } 00:07:27.470 ] 00:07:27.470 }' 00:07:27.470 16:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:07:27.470 16:03:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.040 16:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:28.040 16:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:28.040 16:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:28.040 16:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:28.040 16:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:28.040 16:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:28.040 16:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:28.040 16:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:28.040 16:03:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.040 16:03:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.040 [2024-12-12 16:03:54.115588] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:28.040 16:03:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.040 16:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:28.040 "name": "Existed_Raid", 00:07:28.040 "aliases": [ 00:07:28.040 "e07fd8f7-200f-4c53-9bff-153e65baf5c6" 00:07:28.040 ], 00:07:28.040 "product_name": "Raid Volume", 00:07:28.040 "block_size": 512, 00:07:28.040 "num_blocks": 131072, 00:07:28.040 "uuid": "e07fd8f7-200f-4c53-9bff-153e65baf5c6", 00:07:28.040 "assigned_rate_limits": { 00:07:28.040 "rw_ios_per_sec": 0, 00:07:28.040 "rw_mbytes_per_sec": 0, 00:07:28.040 "r_mbytes_per_sec": 
0, 00:07:28.040 "w_mbytes_per_sec": 0 00:07:28.040 }, 00:07:28.040 "claimed": false, 00:07:28.040 "zoned": false, 00:07:28.040 "supported_io_types": { 00:07:28.040 "read": true, 00:07:28.040 "write": true, 00:07:28.040 "unmap": true, 00:07:28.040 "flush": true, 00:07:28.040 "reset": true, 00:07:28.040 "nvme_admin": false, 00:07:28.040 "nvme_io": false, 00:07:28.040 "nvme_io_md": false, 00:07:28.040 "write_zeroes": true, 00:07:28.040 "zcopy": false, 00:07:28.040 "get_zone_info": false, 00:07:28.040 "zone_management": false, 00:07:28.040 "zone_append": false, 00:07:28.040 "compare": false, 00:07:28.040 "compare_and_write": false, 00:07:28.040 "abort": false, 00:07:28.040 "seek_hole": false, 00:07:28.040 "seek_data": false, 00:07:28.040 "copy": false, 00:07:28.040 "nvme_iov_md": false 00:07:28.040 }, 00:07:28.040 "memory_domains": [ 00:07:28.040 { 00:07:28.040 "dma_device_id": "system", 00:07:28.040 "dma_device_type": 1 00:07:28.040 }, 00:07:28.040 { 00:07:28.040 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:28.040 "dma_device_type": 2 00:07:28.040 }, 00:07:28.040 { 00:07:28.040 "dma_device_id": "system", 00:07:28.040 "dma_device_type": 1 00:07:28.040 }, 00:07:28.040 { 00:07:28.040 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:28.040 "dma_device_type": 2 00:07:28.040 } 00:07:28.040 ], 00:07:28.040 "driver_specific": { 00:07:28.040 "raid": { 00:07:28.040 "uuid": "e07fd8f7-200f-4c53-9bff-153e65baf5c6", 00:07:28.040 "strip_size_kb": 64, 00:07:28.040 "state": "online", 00:07:28.040 "raid_level": "raid0", 00:07:28.040 "superblock": false, 00:07:28.040 "num_base_bdevs": 2, 00:07:28.040 "num_base_bdevs_discovered": 2, 00:07:28.040 "num_base_bdevs_operational": 2, 00:07:28.040 "base_bdevs_list": [ 00:07:28.040 { 00:07:28.040 "name": "BaseBdev1", 00:07:28.040 "uuid": "88febafa-3ab5-4181-b715-98f88488bf67", 00:07:28.040 "is_configured": true, 00:07:28.040 "data_offset": 0, 00:07:28.040 "data_size": 65536 00:07:28.040 }, 00:07:28.040 { 00:07:28.040 "name": "BaseBdev2", 
00:07:28.040 "uuid": "0d57434a-c51c-4252-9c58-b8fb5b511ae4", 00:07:28.040 "is_configured": true, 00:07:28.040 "data_offset": 0, 00:07:28.040 "data_size": 65536 00:07:28.040 } 00:07:28.040 ] 00:07:28.040 } 00:07:28.040 } 00:07:28.040 }' 00:07:28.040 16:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:28.040 16:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:28.040 BaseBdev2' 00:07:28.040 16:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:28.040 16:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:28.040 16:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:28.040 16:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:28.040 16:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:28.040 16:03:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.040 16:03:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.040 16:03:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.040 16:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:28.040 16:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:28.040 16:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:28.040 16:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:07:28.040 16:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:28.040 16:03:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.040 16:03:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.040 16:03:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.040 16:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:28.040 16:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:28.040 16:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:28.040 16:03:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.040 16:03:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.040 [2024-12-12 16:03:54.330955] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:28.040 [2024-12-12 16:03:54.330999] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:28.040 [2024-12-12 16:03:54.331067] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:28.300 16:03:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.300 16:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:28.300 16:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:28.300 16:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:28.300 16:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:28.301 16:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:07:28.301 16:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:28.301 16:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:28.301 16:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:28.301 16:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:28.301 16:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:28.301 16:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:28.301 16:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:28.301 16:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:28.301 16:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:28.301 16:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:28.301 16:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:28.301 16:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:28.301 16:03:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.301 16:03:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.301 16:03:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.301 16:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:28.301 "name": "Existed_Raid", 00:07:28.301 "uuid": "e07fd8f7-200f-4c53-9bff-153e65baf5c6", 00:07:28.301 "strip_size_kb": 64, 00:07:28.301 
"state": "offline", 00:07:28.301 "raid_level": "raid0", 00:07:28.301 "superblock": false, 00:07:28.301 "num_base_bdevs": 2, 00:07:28.301 "num_base_bdevs_discovered": 1, 00:07:28.301 "num_base_bdevs_operational": 1, 00:07:28.301 "base_bdevs_list": [ 00:07:28.301 { 00:07:28.301 "name": null, 00:07:28.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:28.301 "is_configured": false, 00:07:28.301 "data_offset": 0, 00:07:28.301 "data_size": 65536 00:07:28.301 }, 00:07:28.301 { 00:07:28.301 "name": "BaseBdev2", 00:07:28.301 "uuid": "0d57434a-c51c-4252-9c58-b8fb5b511ae4", 00:07:28.301 "is_configured": true, 00:07:28.301 "data_offset": 0, 00:07:28.301 "data_size": 65536 00:07:28.301 } 00:07:28.301 ] 00:07:28.301 }' 00:07:28.301 16:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:28.301 16:03:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.562 16:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:28.562 16:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:28.562 16:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:28.562 16:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:28.562 16:03:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.562 16:03:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.562 16:03:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.562 16:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:28.562 16:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:28.562 16:03:54 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:28.562 16:03:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.562 16:03:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.822 [2024-12-12 16:03:54.911982] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:28.822 [2024-12-12 16:03:54.912045] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:28.822 16:03:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.822 16:03:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:28.822 16:03:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:28.822 16:03:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:28.822 16:03:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:28.822 16:03:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.822 16:03:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.822 16:03:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.822 16:03:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:28.822 16:03:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:28.822 16:03:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:28.822 16:03:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 62699 00:07:28.822 16:03:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 62699 ']' 00:07:28.822 16:03:55 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 62699 00:07:28.822 16:03:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:07:28.822 16:03:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:28.822 16:03:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62699 00:07:28.822 killing process with pid 62699 00:07:28.822 16:03:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:28.822 16:03:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:28.822 16:03:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62699' 00:07:28.822 16:03:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 62699 00:07:28.822 [2024-12-12 16:03:55.102434] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:28.822 16:03:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 62699 00:07:28.822 [2024-12-12 16:03:55.121239] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:30.254 ************************************ 00:07:30.254 END TEST raid_state_function_test 00:07:30.254 ************************************ 00:07:30.254 16:03:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:30.254 00:07:30.254 real 0m5.222s 00:07:30.254 user 0m7.358s 00:07:30.254 sys 0m0.955s 00:07:30.254 16:03:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:30.254 16:03:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.254 16:03:56 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:07:30.254 16:03:56 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 
']' 00:07:30.254 16:03:56 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:30.254 16:03:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:30.254 ************************************ 00:07:30.254 START TEST raid_state_function_test_sb 00:07:30.254 ************************************ 00:07:30.254 16:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 true 00:07:30.254 16:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:30.254 16:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:30.254 16:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:30.254 16:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:30.254 16:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:30.254 16:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:30.254 16:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:30.254 16:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:30.254 16:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:30.254 16:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:30.254 16:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:30.254 16:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:30.254 16:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:30.254 16:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:07:30.254 16:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:30.254 16:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:30.254 16:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:30.254 16:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:30.254 16:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:30.254 16:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:30.254 16:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:30.254 16:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:30.254 16:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:30.254 16:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=62948 00:07:30.254 16:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:30.254 16:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62948' 00:07:30.254 Process raid pid: 62948 00:07:30.254 16:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 62948 00:07:30.254 16:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 62948 ']' 00:07:30.254 16:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:30.254 16:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:30.254 Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock... 00:07:30.254 16:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:30.254 16:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:30.254 16:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:30.254 [2024-12-12 16:03:56.516708] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:07:30.254 [2024-12-12 16:03:56.516827] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:30.514 [2024-12-12 16:03:56.700598] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.514 [2024-12-12 16:03:56.840823] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.773 [2024-12-12 16:03:57.082646] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:30.773 [2024-12-12 16:03:57.082697] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:31.032 16:03:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:31.032 16:03:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:07:31.032 16:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:31.032 16:03:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.032 16:03:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:31.032 [2024-12-12 16:03:57.370126] bdev.c:8697:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev1 00:07:31.032 [2024-12-12 16:03:57.370188] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:31.032 [2024-12-12 16:03:57.370199] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:31.032 [2024-12-12 16:03:57.370208] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:31.032 16:03:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.032 16:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:31.033 16:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:31.033 16:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:31.033 16:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:31.033 16:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:31.033 16:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:31.033 16:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:31.033 16:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:31.033 16:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:31.033 16:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:31.033 16:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:31.033 16:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:07:31.033 16:03:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.033 16:03:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:31.292 16:03:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.292 16:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:31.292 "name": "Existed_Raid", 00:07:31.292 "uuid": "dc4528cf-696c-4fc6-9cd9-607dcb447299", 00:07:31.292 "strip_size_kb": 64, 00:07:31.292 "state": "configuring", 00:07:31.292 "raid_level": "raid0", 00:07:31.292 "superblock": true, 00:07:31.292 "num_base_bdevs": 2, 00:07:31.292 "num_base_bdevs_discovered": 0, 00:07:31.292 "num_base_bdevs_operational": 2, 00:07:31.292 "base_bdevs_list": [ 00:07:31.292 { 00:07:31.292 "name": "BaseBdev1", 00:07:31.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:31.292 "is_configured": false, 00:07:31.292 "data_offset": 0, 00:07:31.292 "data_size": 0 00:07:31.292 }, 00:07:31.292 { 00:07:31.292 "name": "BaseBdev2", 00:07:31.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:31.292 "is_configured": false, 00:07:31.292 "data_offset": 0, 00:07:31.292 "data_size": 0 00:07:31.292 } 00:07:31.292 ] 00:07:31.292 }' 00:07:31.292 16:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:31.292 16:03:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:31.552 16:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:31.552 16:03:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.552 16:03:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:31.552 [2024-12-12 16:03:57.865238] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:31.552 
[2024-12-12 16:03:57.865343] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:31.552 16:03:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.552 16:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:31.552 16:03:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.552 16:03:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:31.552 [2024-12-12 16:03:57.877188] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:31.552 [2024-12-12 16:03:57.877272] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:31.552 [2024-12-12 16:03:57.877300] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:31.552 [2024-12-12 16:03:57.877326] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:31.552 16:03:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.552 16:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:31.552 16:03:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.552 16:03:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:31.813 [2024-12-12 16:03:57.930430] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:31.813 BaseBdev1 00:07:31.813 16:03:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.813 16:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # 
waitforbdev BaseBdev1 00:07:31.813 16:03:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:31.813 16:03:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:31.813 16:03:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:31.813 16:03:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:31.813 16:03:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:31.813 16:03:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:31.813 16:03:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.813 16:03:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:31.813 16:03:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.813 16:03:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:31.813 16:03:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.813 16:03:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:31.813 [ 00:07:31.813 { 00:07:31.813 "name": "BaseBdev1", 00:07:31.813 "aliases": [ 00:07:31.813 "edba8884-4e0d-4be7-a79b-f7cbe11fd4e6" 00:07:31.813 ], 00:07:31.813 "product_name": "Malloc disk", 00:07:31.813 "block_size": 512, 00:07:31.813 "num_blocks": 65536, 00:07:31.813 "uuid": "edba8884-4e0d-4be7-a79b-f7cbe11fd4e6", 00:07:31.813 "assigned_rate_limits": { 00:07:31.813 "rw_ios_per_sec": 0, 00:07:31.813 "rw_mbytes_per_sec": 0, 00:07:31.813 "r_mbytes_per_sec": 0, 00:07:31.813 "w_mbytes_per_sec": 0 00:07:31.813 }, 00:07:31.813 "claimed": true, 00:07:31.813 "claim_type": 
"exclusive_write", 00:07:31.813 "zoned": false, 00:07:31.813 "supported_io_types": { 00:07:31.813 "read": true, 00:07:31.813 "write": true, 00:07:31.813 "unmap": true, 00:07:31.813 "flush": true, 00:07:31.813 "reset": true, 00:07:31.813 "nvme_admin": false, 00:07:31.813 "nvme_io": false, 00:07:31.813 "nvme_io_md": false, 00:07:31.813 "write_zeroes": true, 00:07:31.813 "zcopy": true, 00:07:31.813 "get_zone_info": false, 00:07:31.813 "zone_management": false, 00:07:31.813 "zone_append": false, 00:07:31.813 "compare": false, 00:07:31.813 "compare_and_write": false, 00:07:31.813 "abort": true, 00:07:31.813 "seek_hole": false, 00:07:31.813 "seek_data": false, 00:07:31.813 "copy": true, 00:07:31.813 "nvme_iov_md": false 00:07:31.813 }, 00:07:31.813 "memory_domains": [ 00:07:31.813 { 00:07:31.813 "dma_device_id": "system", 00:07:31.813 "dma_device_type": 1 00:07:31.813 }, 00:07:31.813 { 00:07:31.813 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:31.813 "dma_device_type": 2 00:07:31.813 } 00:07:31.813 ], 00:07:31.813 "driver_specific": {} 00:07:31.813 } 00:07:31.813 ] 00:07:31.813 16:03:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.813 16:03:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:31.813 16:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:31.813 16:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:31.813 16:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:31.813 16:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:31.813 16:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:31.813 16:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=2 00:07:31.813 16:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:31.813 16:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:31.813 16:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:31.813 16:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:31.813 16:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:31.813 16:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:31.813 16:03:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.813 16:03:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:31.813 16:03:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.813 16:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:31.813 "name": "Existed_Raid", 00:07:31.813 "uuid": "f693519a-9614-488f-8c91-47bcb956bf7d", 00:07:31.813 "strip_size_kb": 64, 00:07:31.813 "state": "configuring", 00:07:31.813 "raid_level": "raid0", 00:07:31.813 "superblock": true, 00:07:31.813 "num_base_bdevs": 2, 00:07:31.813 "num_base_bdevs_discovered": 1, 00:07:31.813 "num_base_bdevs_operational": 2, 00:07:31.813 "base_bdevs_list": [ 00:07:31.813 { 00:07:31.813 "name": "BaseBdev1", 00:07:31.813 "uuid": "edba8884-4e0d-4be7-a79b-f7cbe11fd4e6", 00:07:31.813 "is_configured": true, 00:07:31.813 "data_offset": 2048, 00:07:31.813 "data_size": 63488 00:07:31.813 }, 00:07:31.813 { 00:07:31.813 "name": "BaseBdev2", 00:07:31.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:31.813 "is_configured": false, 00:07:31.813 "data_offset": 0, 00:07:31.813 
"data_size": 0 00:07:31.813 } 00:07:31.813 ] 00:07:31.813 }' 00:07:31.813 16:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:31.813 16:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:32.383 16:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:32.383 16:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.383 16:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:32.383 [2024-12-12 16:03:58.429636] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:32.383 [2024-12-12 16:03:58.429759] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:32.383 16:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.383 16:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:32.383 16:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.383 16:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:32.383 [2024-12-12 16:03:58.441645] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:32.383 [2024-12-12 16:03:58.443744] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:32.383 [2024-12-12 16:03:58.443788] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:32.383 16:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.383 16:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 
00:07:32.383 16:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:32.383 16:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:32.383 16:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:32.383 16:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:32.383 16:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:32.383 16:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:32.383 16:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:32.383 16:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:32.383 16:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:32.383 16:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:32.383 16:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:32.383 16:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:32.383 16:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:32.383 16:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.383 16:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:32.383 16:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.383 16:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:07:32.383 "name": "Existed_Raid", 00:07:32.383 "uuid": "2a2dfe0c-a75d-42bf-a34b-c2dac474372c", 00:07:32.383 "strip_size_kb": 64, 00:07:32.383 "state": "configuring", 00:07:32.383 "raid_level": "raid0", 00:07:32.383 "superblock": true, 00:07:32.383 "num_base_bdevs": 2, 00:07:32.383 "num_base_bdevs_discovered": 1, 00:07:32.383 "num_base_bdevs_operational": 2, 00:07:32.383 "base_bdevs_list": [ 00:07:32.383 { 00:07:32.383 "name": "BaseBdev1", 00:07:32.383 "uuid": "edba8884-4e0d-4be7-a79b-f7cbe11fd4e6", 00:07:32.383 "is_configured": true, 00:07:32.383 "data_offset": 2048, 00:07:32.383 "data_size": 63488 00:07:32.383 }, 00:07:32.383 { 00:07:32.383 "name": "BaseBdev2", 00:07:32.383 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:32.383 "is_configured": false, 00:07:32.383 "data_offset": 0, 00:07:32.383 "data_size": 0 00:07:32.383 } 00:07:32.383 ] 00:07:32.383 }' 00:07:32.383 16:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:32.383 16:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:32.643 16:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:32.644 16:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.644 16:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:32.644 [2024-12-12 16:03:58.890153] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:32.644 [2024-12-12 16:03:58.890574] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:32.644 [2024-12-12 16:03:58.890634] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:32.644 [2024-12-12 16:03:58.890970] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:32.644 [2024-12-12 16:03:58.891210] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:32.644 [2024-12-12 16:03:58.891264] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:32.644 BaseBdev2 00:07:32.644 [2024-12-12 16:03:58.891469] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:32.644 16:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.644 16:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:32.644 16:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:32.644 16:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:32.644 16:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:32.644 16:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:32.644 16:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:32.644 16:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:32.644 16:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.644 16:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:32.644 16:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.644 16:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:32.644 16:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.644 16:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:07:32.644 [
00:07:32.644 {
00:07:32.644 "name": "BaseBdev2",
00:07:32.644 "aliases": [
00:07:32.644 "8792f081-784c-4864-96db-fedc00303c17"
00:07:32.644 ],
00:07:32.644 "product_name": "Malloc disk",
00:07:32.644 "block_size": 512,
00:07:32.644 "num_blocks": 65536,
00:07:32.644 "uuid": "8792f081-784c-4864-96db-fedc00303c17",
00:07:32.644 "assigned_rate_limits": {
00:07:32.644 "rw_ios_per_sec": 0,
00:07:32.644 "rw_mbytes_per_sec": 0,
00:07:32.644 "r_mbytes_per_sec": 0,
00:07:32.644 "w_mbytes_per_sec": 0
00:07:32.644 },
00:07:32.644 "claimed": true,
00:07:32.644 "claim_type": "exclusive_write",
00:07:32.644 "zoned": false,
00:07:32.644 "supported_io_types": {
00:07:32.644 "read": true,
00:07:32.644 "write": true,
00:07:32.644 "unmap": true,
00:07:32.644 "flush": true,
00:07:32.644 "reset": true,
00:07:32.644 "nvme_admin": false,
00:07:32.644 "nvme_io": false,
00:07:32.644 "nvme_io_md": false,
00:07:32.644 "write_zeroes": true,
00:07:32.644 "zcopy": true,
00:07:32.644 "get_zone_info": false,
00:07:32.644 "zone_management": false,
00:07:32.644 "zone_append": false,
00:07:32.644 "compare": false,
00:07:32.644 "compare_and_write": false,
00:07:32.644 "abort": true,
00:07:32.644 "seek_hole": false,
00:07:32.644 "seek_data": false,
00:07:32.644 "copy": true,
00:07:32.644 "nvme_iov_md": false
00:07:32.644 },
00:07:32.644 "memory_domains": [
00:07:32.644 {
00:07:32.644 "dma_device_id": "system",
00:07:32.644 "dma_device_type": 1
00:07:32.644 },
00:07:32.644 {
00:07:32.644 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:32.644 "dma_device_type": 2
00:07:32.644 }
00:07:32.644 ],
00:07:32.644 "driver_specific": {}
00:07:32.644 }
00:07:32.644 ]
00:07:32.644 16:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:32.644 16:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:07:32.644 16:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:07:32.644 16:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:07:32.644 16:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2
00:07:32.644 16:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:07:32.644 16:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:07:32.644 16:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:07:32.644 16:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:32.644 16:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:32.644 16:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:32.644 16:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:32.644 16:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:32.644 16:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:32.644 16:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:32.644 16:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:07:32.644 16:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:32.644 16:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:32.644 16:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:32.644 16:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:32.644 "name": "Existed_Raid",
00:07:32.644 "uuid": "2a2dfe0c-a75d-42bf-a34b-c2dac474372c",
00:07:32.644 "strip_size_kb": 64,
00:07:32.644 "state": "online",
00:07:32.644 "raid_level": "raid0",
00:07:32.644 "superblock": true,
00:07:32.644 "num_base_bdevs": 2,
00:07:32.644 "num_base_bdevs_discovered": 2,
00:07:32.644 "num_base_bdevs_operational": 2,
00:07:32.644 "base_bdevs_list": [
00:07:32.644 {
00:07:32.644 "name": "BaseBdev1",
00:07:32.644 "uuid": "edba8884-4e0d-4be7-a79b-f7cbe11fd4e6",
00:07:32.644 "is_configured": true,
00:07:32.644 "data_offset": 2048,
00:07:32.644 "data_size": 63488
00:07:32.644 },
00:07:32.644 {
00:07:32.644 "name": "BaseBdev2",
00:07:32.644 "uuid": "8792f081-784c-4864-96db-fedc00303c17",
00:07:32.644 "is_configured": true,
00:07:32.644 "data_offset": 2048,
00:07:32.644 "data_size": 63488
00:07:32.644 }
00:07:32.644 ]
00:07:32.644 }'
00:07:32.644 16:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:32.644 16:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:33.213 16:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:07:33.213 16:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:07:33.213 16:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:07:33.213 16:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:07:33.213 16:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name
00:07:33.213 16:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:07:33.213 16:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:07:33.213 16:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:07:33.213 16:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:33.213 16:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:33.213 [2024-12-12 16:03:59.385712] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:07:33.213 16:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:33.213 16:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:07:33.213 "name": "Existed_Raid",
00:07:33.213 "aliases": [
00:07:33.213 "2a2dfe0c-a75d-42bf-a34b-c2dac474372c"
00:07:33.213 ],
00:07:33.213 "product_name": "Raid Volume",
00:07:33.213 "block_size": 512,
00:07:33.213 "num_blocks": 126976,
00:07:33.213 "uuid": "2a2dfe0c-a75d-42bf-a34b-c2dac474372c",
00:07:33.213 "assigned_rate_limits": {
00:07:33.213 "rw_ios_per_sec": 0,
00:07:33.213 "rw_mbytes_per_sec": 0,
00:07:33.213 "r_mbytes_per_sec": 0,
00:07:33.213 "w_mbytes_per_sec": 0
00:07:33.213 },
00:07:33.213 "claimed": false,
00:07:33.213 "zoned": false,
00:07:33.213 "supported_io_types": {
00:07:33.213 "read": true,
00:07:33.213 "write": true,
00:07:33.213 "unmap": true,
00:07:33.213 "flush": true,
00:07:33.213 "reset": true,
00:07:33.213 "nvme_admin": false,
00:07:33.213 "nvme_io": false,
00:07:33.213 "nvme_io_md": false,
00:07:33.213 "write_zeroes": true,
00:07:33.213 "zcopy": false,
00:07:33.213 "get_zone_info": false,
00:07:33.213 "zone_management": false,
00:07:33.213 "zone_append": false,
00:07:33.213 "compare": false,
00:07:33.213 "compare_and_write": false,
00:07:33.213 "abort": false,
00:07:33.213 "seek_hole": false,
00:07:33.213 "seek_data": false,
00:07:33.213 "copy": false,
00:07:33.213 "nvme_iov_md": false
00:07:33.213 },
00:07:33.213 "memory_domains": [
00:07:33.213 {
00:07:33.213 "dma_device_id": "system",
00:07:33.213 "dma_device_type": 1
00:07:33.213 },
00:07:33.213 {
00:07:33.213 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:33.213 "dma_device_type": 2
00:07:33.213 },
00:07:33.213 {
00:07:33.213 "dma_device_id": "system",
00:07:33.213 "dma_device_type": 1
00:07:33.213 },
00:07:33.213 {
00:07:33.213 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:33.213 "dma_device_type": 2
00:07:33.213 }
00:07:33.213 ],
00:07:33.213 "driver_specific": {
00:07:33.213 "raid": {
00:07:33.214 "uuid": "2a2dfe0c-a75d-42bf-a34b-c2dac474372c",
00:07:33.214 "strip_size_kb": 64,
00:07:33.214 "state": "online",
00:07:33.214 "raid_level": "raid0",
00:07:33.214 "superblock": true,
00:07:33.214 "num_base_bdevs": 2,
00:07:33.214 "num_base_bdevs_discovered": 2,
00:07:33.214 "num_base_bdevs_operational": 2,
00:07:33.214 "base_bdevs_list": [
00:07:33.214 {
00:07:33.214 "name": "BaseBdev1",
00:07:33.214 "uuid": "edba8884-4e0d-4be7-a79b-f7cbe11fd4e6",
00:07:33.214 "is_configured": true,
00:07:33.214 "data_offset": 2048,
00:07:33.214 "data_size": 63488
00:07:33.214 },
00:07:33.214 {
00:07:33.214 "name": "BaseBdev2",
00:07:33.214 "uuid": "8792f081-784c-4864-96db-fedc00303c17",
00:07:33.214 "is_configured": true,
00:07:33.214 "data_offset": 2048,
00:07:33.214 "data_size": 63488
00:07:33.214 }
00:07:33.214 ]
00:07:33.214 }
00:07:33.214 }
00:07:33.214 }'
00:07:33.214 16:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:07:33.214 16:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:07:33.214 BaseBdev2'
00:07:33.214 16:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:07:33.214 16:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:07:33.214 16:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:07:33.214 16:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:07:33.214 16:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:07:33.214 16:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:33.214 16:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:33.214 16:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:33.214 16:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:07:33.214 16:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:07:33.214 16:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:07:33.214 16:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:07:33.214 16:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:07:33.214 16:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:33.214 16:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:33.474 16:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:33.474 16:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:07:33.474 16:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:07:33.474 16:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:07:33.474 16:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:33.474 16:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:33.474 [2024-12-12 16:03:59.585064] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
[2024-12-12 16:03:59.585105] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
[2024-12-12 16:03:59.585161] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:07:33.474 16:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:33.474 16:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state
00:07:33.474 16:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0
00:07:33.474 16:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in
00:07:33.474 16:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1
00:07:33.474 16:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline
00:07:33.474 16:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1
00:07:33.474 16:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:07:33.474 16:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline
00:07:33.474 16:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:07:33.474 16:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:33.474 16:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:07:33.474 16:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:33.474 16:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:33.474 16:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:33.474 16:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:33.474 16:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:33.474 16:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:07:33.474 16:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:33.474 16:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:33.474 16:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:33.474 16:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:33.474 "name": "Existed_Raid",
00:07:33.474 "uuid": "2a2dfe0c-a75d-42bf-a34b-c2dac474372c",
00:07:33.474 "strip_size_kb": 64,
00:07:33.474 "state": "offline",
00:07:33.474 "raid_level": "raid0",
00:07:33.474 "superblock": true,
00:07:33.474 "num_base_bdevs": 2,
00:07:33.474 "num_base_bdevs_discovered": 1,
00:07:33.474 "num_base_bdevs_operational": 1,
00:07:33.474 "base_bdevs_list": [
00:07:33.474 {
00:07:33.474 "name": null,
00:07:33.474 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:33.474 "is_configured": false,
00:07:33.474 "data_offset": 0,
00:07:33.474 "data_size": 63488
00:07:33.474 },
00:07:33.474 {
00:07:33.474 "name": "BaseBdev2",
00:07:33.474 "uuid": "8792f081-784c-4864-96db-fedc00303c17",
00:07:33.474 "is_configured": true,
00:07:33.474 "data_offset": 2048,
00:07:33.474 "data_size": 63488
00:07:33.474 }
00:07:33.474 ]
00:07:33.474 }'
00:07:33.474 16:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:33.474 16:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:34.044 16:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:07:34.044 16:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:07:34.044 16:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:34.044 16:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:34.044 16:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:34.044 16:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:07:34.044 16:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:34.044 16:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:07:34.044 16:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:07:34.044 16:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:07:34.044 16:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:34.044 16:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:34.044 [2024-12-12 16:04:00.216653] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
[2024-12-12 16:04:00.216774] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline
00:07:34.044 16:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:34.044 16:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:07:34.044 16:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:07:34.044 16:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:34.044 16:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:07:34.044 16:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:34.044 16:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:34.044 16:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:34.044 16:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:07:34.044 16:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:07:34.044 16:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']'
00:07:34.044 16:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 62948
00:07:34.044 16:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 62948 ']'
00:07:34.044 16:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 62948
00:07:34.044 16:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname
00:07:34.044 16:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:34.044 16:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62948
00:07:34.303 16:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:34.303 16:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:34.304 16:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62948'
killing process with pid 62948
16:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 62948
[2024-12-12 16:04:00.417311] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
16:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 62948
[2024-12-12 16:04:00.434703] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:07:35.684 16:04:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0
00:07:35.684
00:07:35.684 real 0m5.255s
00:07:35.684 user 0m7.417s
00:07:35.684 sys 0m0.943s
00:07:35.684 16:04:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:35.684 ************************************
00:07:35.684 END TEST raid_state_function_test_sb
00:07:35.684 ************************************
00:07:35.684 16:04:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:35.684 16:04:01 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2
00:07:35.684 16:04:01 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:07:35.684 16:04:01 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:35.684 16:04:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:07:35.684 ************************************
00:07:35.684 START TEST raid_superblock_test
00:07:35.684 ************************************
00:07:35.684 16:04:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 2
00:07:35.684 16:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0
00:07:35.684 16:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2
00:07:35.684 16:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=()
00:07:35.684 16:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc
00:07:35.684 16:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=()
00:07:35.684 16:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt
00:07:35.684 16:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=()
00:07:35.684 16:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid
00:07:35.684 16:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1
00:07:35.684 16:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size
00:07:35.684 16:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg
00:07:35.684 16:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid
00:07:35.684 16:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev
00:07:35.684 16:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']'
00:07:35.684 16:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64
00:07:35.684 16:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64'
00:07:35.684 16:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid
00:07:35.684 16:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=63206
00:07:35.684 16:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 63206
00:07:35.684 16:04:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 63206 ']'
00:07:35.684 16:04:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:35.684 16:04:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:35.684 16:04:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:35.684 16:04:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:35.684 16:04:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:35.684 [2024-12-12 16:04:01.843597] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
[2024-12-12 16:04:01.843932] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63206 ]
00:07:35.944 [2024-12-12 16:04:02.047802] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:35.944 [2024-12-12 16:04:02.185825] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:07:36.203 [2024-12-12 16:04:02.425576] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
[2024-12-12 16:04:02.425664] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:36.463 16:04:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:36.463 16:04:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0
00:07:36.463 16:04:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 ))
00:07:36.463 16:04:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:07:36.463 16:04:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1
00:07:36.463 16:04:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1
00:07:36.463 16:04:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:07:36.463 16:04:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:07:36.463 16:04:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:07:36.463 16:04:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:07:36.463 16:04:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1
00:07:36.463 16:04:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:36.463 16:04:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:36.463 malloc1
00:07:36.463 16:04:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:36.463 16:04:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:07:36.463 16:04:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:36.463 16:04:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:36.463 [2024-12-12 16:04:02.735324] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1
[2024-12-12 16:04:02.735467] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
[2024-12-12 16:04:02.735516] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
[2024-12-12 16:04:02.735552] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
[2024-12-12 16:04:02.738156] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
[2024-12-12 16:04:02.738234] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
pt1
00:07:36.463 16:04:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:36.463 16:04:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:07:36.463 16:04:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:07:36.463 16:04:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2
00:07:36.463 16:04:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2
00:07:36.463 16:04:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:07:36.463 16:04:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:07:36.463 16:04:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:07:36.463 16:04:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:07:36.463 16:04:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2
00:07:36.463 16:04:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:36.463 16:04:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:36.463 malloc2
00:07:36.463 16:04:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:36.463 16:04:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:07:36.463 16:04:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:36.463 16:04:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:36.463 [2024-12-12 16:04:02.801743] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2
[2024-12-12 16:04:02.801849] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
[2024-12-12 16:04:02.801909] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
[2024-12-12 16:04:02.801940] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
[2024-12-12 16:04:02.804446] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
[2024-12-12 16:04:02.804532] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
pt2
00:07:36.463 16:04:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:36.463 16:04:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:07:36.463 16:04:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:07:36.463 16:04:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s
00:07:36.463 16:04:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:36.463 16:04:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:36.723 [2024-12-12 16:04:02.813782] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
[2024-12-12 16:04:02.815934] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
[2024-12-12 16:04:02.816097] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
[2024-12-12 16:04:02.816111] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
[2024-12-12 16:04:02.816364] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
[2024-12-12 16:04:02.816544] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
[2024-12-12 16:04:02.816556] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780
[2024-12-12 16:04:02.816713] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:36.723 16:04:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:36.723 16:04:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2
00:07:36.723 16:04:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:07:36.723 16:04:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:07:36.723 16:04:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:07:36.723 16:04:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:36.723 16:04:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:36.723 16:04:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:36.723 16:04:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:36.723 16:04:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:36.723 16:04:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:36.723 16:04:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:36.723 16:04:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:07:36.723 16:04:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:36.723 16:04:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:36.723 16:04:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:36.723 16:04:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:36.723 "name": "raid_bdev1",
00:07:36.723 "uuid": "52b33abe-e28e-44d1-ae43-bf2179686f1f",
00:07:36.723 "strip_size_kb": 64,
00:07:36.723 "state": "online",
00:07:36.723 "raid_level": "raid0",
00:07:36.723 "superblock": true,
00:07:36.723 "num_base_bdevs": 2,
00:07:36.723 "num_base_bdevs_discovered": 2,
00:07:36.723 "num_base_bdevs_operational": 2,
00:07:36.723 "base_bdevs_list": [
00:07:36.723 {
00:07:36.723 "name": "pt1",
00:07:36.723 "uuid": "00000000-0000-0000-0000-000000000001",
00:07:36.723 "is_configured": true,
00:07:36.723 "data_offset": 2048,
00:07:36.723 "data_size": 63488
00:07:36.723 },
00:07:36.723 {
00:07:36.723 "name": "pt2",
00:07:36.723 "uuid": "00000000-0000-0000-0000-000000000002",
00:07:36.723 "is_configured": true,
00:07:36.723 "data_offset": 2048,
00:07:36.723 "data_size": 63488
00:07:36.723 }
00:07:36.723 ]
00:07:36.723 }'
00:07:36.723 16:04:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:36.723 16:04:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:36.982 16:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:07:36.982 16:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:07:36.982 16:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:07:36.982 16:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:07:36.982 16:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:07:36.982 16:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:07:36.982 16:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:07:36.982 16:04:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:36.982 16:04:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:36.982 16:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
[2024-12-12 16:04:03.245364] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:07:36.982 16:04:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:36.982 16:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:07:36.982 "name": "raid_bdev1",
00:07:36.982 "aliases": [
00:07:36.982 "52b33abe-e28e-44d1-ae43-bf2179686f1f"
00:07:36.982 ],
00:07:36.982 "product_name": "Raid Volume",
00:07:36.982 "block_size": 512,
00:07:36.982 "num_blocks": 126976,
00:07:36.982 "uuid": "52b33abe-e28e-44d1-ae43-bf2179686f1f",
00:07:36.982 "assigned_rate_limits": {
00:07:36.982 "rw_ios_per_sec": 0,
00:07:36.982 "rw_mbytes_per_sec": 0,
00:07:36.982 "r_mbytes_per_sec": 0,
00:07:36.982 "w_mbytes_per_sec": 0
00:07:36.982 },
00:07:36.982 "claimed": false,
00:07:36.982 "zoned": false,
00:07:36.982 "supported_io_types": {
00:07:36.982 "read": true,
00:07:36.982 "write": true,
00:07:36.982 "unmap": true,
00:07:36.982 "flush": true,
00:07:36.982 "reset": true,
00:07:36.982 "nvme_admin": false,
00:07:36.982 "nvme_io": false,
00:07:36.982 "nvme_io_md": false,
00:07:36.982 "write_zeroes": true,
00:07:36.982 "zcopy": false,
00:07:36.982 "get_zone_info": false,
00:07:36.982 "zone_management": false,
00:07:36.982 "zone_append": false,
00:07:36.982 "compare": false,
00:07:36.982 "compare_and_write": false,
00:07:36.982 "abort": false,
00:07:36.982 "seek_hole": false,
00:07:36.982 "seek_data": false,
00:07:36.982 "copy": false,
00:07:36.982 "nvme_iov_md": false
00:07:36.982 },
00:07:36.982 "memory_domains": [
00:07:36.982 {
00:07:36.982 "dma_device_id": "system",
00:07:36.982 "dma_device_type": 1
00:07:36.982 },
00:07:36.982 {
00:07:36.982 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:36.982 "dma_device_type": 2
00:07:36.982 },
00:07:36.982 {
00:07:36.982 "dma_device_id": "system",
00:07:36.982 "dma_device_type": 1
00:07:36.982 },
00:07:36.982 {
00:07:36.982 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:36.982 "dma_device_type": 2
00:07:36.982 }
00:07:36.982 ],
00:07:36.982 "driver_specific": {
00:07:36.982 "raid": {
00:07:36.982 "uuid": "52b33abe-e28e-44d1-ae43-bf2179686f1f",
00:07:36.982 "strip_size_kb": 64,
00:07:36.982 "state": "online",
00:07:36.982 "raid_level": "raid0",
00:07:36.982 "superblock": true,
00:07:36.982 "num_base_bdevs": 2,
00:07:36.982 "num_base_bdevs_discovered": 2,
00:07:36.982 "num_base_bdevs_operational": 2,
00:07:36.982 "base_bdevs_list": [
00:07:36.982 {
00:07:36.982 "name": "pt1",
00:07:36.982 "uuid": "00000000-0000-0000-0000-000000000001",
00:07:36.982 "is_configured": true,
00:07:36.982 "data_offset": 2048,
00:07:36.982 "data_size": 63488
00:07:36.982 },
00:07:36.982 {
00:07:36.982 "name": "pt2",
00:07:36.982 "uuid": "00000000-0000-0000-0000-000000000002",
00:07:36.982 "is_configured": true,
00:07:36.982 "data_offset": 2048,
00:07:36.982 "data_size": 63488
00:07:36.982 }
00:07:36.982 ]
00:07:36.982 }
00:07:36.982 }
00:07:36.982 }'
00:07:36.982 16:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:07:37.242 16:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:07:37.242 pt2'
00:07:37.242 16:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:07:37.242 16:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:07:37.242 16:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:07:37.242 16:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:07:37.242 16:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:07:37.242 16:04:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:37.242 16:04:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:37.242 16:04:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:37.242 16:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:07:37.242 16:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:07:37.242 16:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:07:37.242 16:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:07:37.242 16:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:07:37.242 16:04:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:37.242 16:04:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:37.242 16:04:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:37.242 16:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:07:37.242 16:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:07:37.242 16:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:07:37.242 16:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:37.242 16:04:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.242 16:04:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.242 [2024-12-12 16:04:03.472927] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:37.242 16:04:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.242 16:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=52b33abe-e28e-44d1-ae43-bf2179686f1f 00:07:37.242 16:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 52b33abe-e28e-44d1-ae43-bf2179686f1f ']' 00:07:37.242 16:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:37.242 16:04:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.242 16:04:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.242 [2024-12-12 16:04:03.520530] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:37.242 [2024-12-12 16:04:03.520598] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:37.242 [2024-12-12 16:04:03.520718] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:37.242 [2024-12-12 16:04:03.520813] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:37.242 [2024-12-12 16:04:03.520881] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:07:37.242 16:04:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.242 16:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq 
-r '.[]' 00:07:37.242 16:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:37.242 16:04:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.242 16:04:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.242 16:04:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.242 16:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:37.242 16:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:37.242 16:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:37.242 16:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:37.242 16:04:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.242 16:04:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.242 16:04:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.242 16:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:37.242 16:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:37.242 16:04:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.242 16:04:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.242 16:04:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.242 16:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:37.242 16:04:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.242 16:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r 
'[.[] | select(.product_name == "passthru")] | any' 00:07:37.242 16:04:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.503 16:04:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.503 16:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:37.503 16:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:37.503 16:04:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:07:37.503 16:04:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:37.503 16:04:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:37.503 16:04:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:37.503 16:04:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:37.503 16:04:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:37.503 16:04:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:37.503 16:04:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.503 16:04:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.503 [2024-12-12 16:04:03.628380] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:37.503 [2024-12-12 16:04:03.630667] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:37.503 [2024-12-12 16:04:03.630736] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: 
*ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:37.503 [2024-12-12 16:04:03.630786] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:37.503 [2024-12-12 16:04:03.630800] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:37.503 [2024-12-12 16:04:03.630814] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:07:37.503 request: 00:07:37.503 { 00:07:37.503 "name": "raid_bdev1", 00:07:37.503 "raid_level": "raid0", 00:07:37.503 "base_bdevs": [ 00:07:37.503 "malloc1", 00:07:37.503 "malloc2" 00:07:37.503 ], 00:07:37.503 "strip_size_kb": 64, 00:07:37.503 "superblock": false, 00:07:37.503 "method": "bdev_raid_create", 00:07:37.503 "req_id": 1 00:07:37.503 } 00:07:37.503 Got JSON-RPC error response 00:07:37.503 response: 00:07:37.503 { 00:07:37.503 "code": -17, 00:07:37.503 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:37.503 } 00:07:37.503 16:04:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:37.503 16:04:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:07:37.503 16:04:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:37.503 16:04:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:37.503 16:04:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:37.503 16:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:37.503 16:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:37.503 16:04:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.503 16:04:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.503 
16:04:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.503 16:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:37.503 16:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:37.503 16:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:37.503 16:04:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.503 16:04:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.503 [2024-12-12 16:04:03.696261] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:37.503 [2024-12-12 16:04:03.696387] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:37.503 [2024-12-12 16:04:03.696441] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:37.503 [2024-12-12 16:04:03.696491] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:37.503 [2024-12-12 16:04:03.699179] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:37.503 [2024-12-12 16:04:03.699251] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:37.503 [2024-12-12 16:04:03.699387] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:37.503 [2024-12-12 16:04:03.699476] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:37.503 pt1 00:07:37.503 16:04:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.503 16:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:07:37.503 16:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:07:37.503 16:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:37.503 16:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:37.503 16:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:37.503 16:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:37.503 16:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:37.503 16:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:37.503 16:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:37.503 16:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:37.503 16:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:37.503 16:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:37.503 16:04:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.503 16:04:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.503 16:04:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.503 16:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:37.503 "name": "raid_bdev1", 00:07:37.503 "uuid": "52b33abe-e28e-44d1-ae43-bf2179686f1f", 00:07:37.503 "strip_size_kb": 64, 00:07:37.503 "state": "configuring", 00:07:37.503 "raid_level": "raid0", 00:07:37.503 "superblock": true, 00:07:37.503 "num_base_bdevs": 2, 00:07:37.503 "num_base_bdevs_discovered": 1, 00:07:37.503 "num_base_bdevs_operational": 2, 00:07:37.503 "base_bdevs_list": [ 00:07:37.503 { 00:07:37.503 "name": "pt1", 00:07:37.503 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:07:37.503 "is_configured": true, 00:07:37.503 "data_offset": 2048, 00:07:37.503 "data_size": 63488 00:07:37.503 }, 00:07:37.503 { 00:07:37.503 "name": null, 00:07:37.503 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:37.503 "is_configured": false, 00:07:37.503 "data_offset": 2048, 00:07:37.503 "data_size": 63488 00:07:37.503 } 00:07:37.503 ] 00:07:37.503 }' 00:07:37.503 16:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:37.503 16:04:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.763 16:04:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:37.763 16:04:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:37.763 16:04:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:37.763 16:04:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:37.763 16:04:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.763 16:04:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.763 [2024-12-12 16:04:04.103854] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:37.763 [2024-12-12 16:04:04.103950] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:37.763 [2024-12-12 16:04:04.103977] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:07:37.763 [2024-12-12 16:04:04.104006] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:37.763 [2024-12-12 16:04:04.104557] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:37.763 [2024-12-12 16:04:04.104587] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt2 00:07:37.763 [2024-12-12 16:04:04.104695] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:37.763 [2024-12-12 16:04:04.104728] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:37.763 [2024-12-12 16:04:04.104870] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:37.763 [2024-12-12 16:04:04.104888] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:37.763 [2024-12-12 16:04:04.105184] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:37.763 [2024-12-12 16:04:04.105357] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:37.763 [2024-12-12 16:04:04.105366] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:37.763 [2024-12-12 16:04:04.105523] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:37.763 pt2 00:07:37.763 16:04:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.763 16:04:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:37.763 16:04:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:37.763 16:04:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:37.763 16:04:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:37.763 16:04:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:37.763 16:04:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:37.763 16:04:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:37.763 16:04:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- 
# local num_base_bdevs_operational=2 00:07:37.763 16:04:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:37.763 16:04:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:37.763 16:04:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:37.763 16:04:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:38.023 16:04:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:38.023 16:04:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:38.023 16:04:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.023 16:04:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.023 16:04:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.023 16:04:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:38.023 "name": "raid_bdev1", 00:07:38.023 "uuid": "52b33abe-e28e-44d1-ae43-bf2179686f1f", 00:07:38.023 "strip_size_kb": 64, 00:07:38.023 "state": "online", 00:07:38.023 "raid_level": "raid0", 00:07:38.023 "superblock": true, 00:07:38.023 "num_base_bdevs": 2, 00:07:38.023 "num_base_bdevs_discovered": 2, 00:07:38.023 "num_base_bdevs_operational": 2, 00:07:38.023 "base_bdevs_list": [ 00:07:38.023 { 00:07:38.023 "name": "pt1", 00:07:38.023 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:38.023 "is_configured": true, 00:07:38.023 "data_offset": 2048, 00:07:38.023 "data_size": 63488 00:07:38.023 }, 00:07:38.023 { 00:07:38.023 "name": "pt2", 00:07:38.023 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:38.023 "is_configured": true, 00:07:38.023 "data_offset": 2048, 00:07:38.023 "data_size": 63488 00:07:38.023 } 00:07:38.023 ] 00:07:38.023 }' 00:07:38.023 16:04:04 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:38.023 16:04:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.282 16:04:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:38.282 16:04:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:38.282 16:04:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:38.282 16:04:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:38.282 16:04:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:38.282 16:04:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:38.282 16:04:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:38.282 16:04:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:38.282 16:04:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.282 16:04:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.282 [2024-12-12 16:04:04.511373] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:38.282 16:04:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.282 16:04:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:38.282 "name": "raid_bdev1", 00:07:38.282 "aliases": [ 00:07:38.282 "52b33abe-e28e-44d1-ae43-bf2179686f1f" 00:07:38.282 ], 00:07:38.282 "product_name": "Raid Volume", 00:07:38.282 "block_size": 512, 00:07:38.282 "num_blocks": 126976, 00:07:38.282 "uuid": "52b33abe-e28e-44d1-ae43-bf2179686f1f", 00:07:38.282 "assigned_rate_limits": { 00:07:38.282 "rw_ios_per_sec": 0, 00:07:38.282 "rw_mbytes_per_sec": 0, 00:07:38.282 
"r_mbytes_per_sec": 0, 00:07:38.282 "w_mbytes_per_sec": 0 00:07:38.282 }, 00:07:38.282 "claimed": false, 00:07:38.282 "zoned": false, 00:07:38.282 "supported_io_types": { 00:07:38.282 "read": true, 00:07:38.282 "write": true, 00:07:38.282 "unmap": true, 00:07:38.282 "flush": true, 00:07:38.282 "reset": true, 00:07:38.282 "nvme_admin": false, 00:07:38.282 "nvme_io": false, 00:07:38.282 "nvme_io_md": false, 00:07:38.283 "write_zeroes": true, 00:07:38.283 "zcopy": false, 00:07:38.283 "get_zone_info": false, 00:07:38.283 "zone_management": false, 00:07:38.283 "zone_append": false, 00:07:38.283 "compare": false, 00:07:38.283 "compare_and_write": false, 00:07:38.283 "abort": false, 00:07:38.283 "seek_hole": false, 00:07:38.283 "seek_data": false, 00:07:38.283 "copy": false, 00:07:38.283 "nvme_iov_md": false 00:07:38.283 }, 00:07:38.283 "memory_domains": [ 00:07:38.283 { 00:07:38.283 "dma_device_id": "system", 00:07:38.283 "dma_device_type": 1 00:07:38.283 }, 00:07:38.283 { 00:07:38.283 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:38.283 "dma_device_type": 2 00:07:38.283 }, 00:07:38.283 { 00:07:38.283 "dma_device_id": "system", 00:07:38.283 "dma_device_type": 1 00:07:38.283 }, 00:07:38.283 { 00:07:38.283 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:38.283 "dma_device_type": 2 00:07:38.283 } 00:07:38.283 ], 00:07:38.283 "driver_specific": { 00:07:38.283 "raid": { 00:07:38.283 "uuid": "52b33abe-e28e-44d1-ae43-bf2179686f1f", 00:07:38.283 "strip_size_kb": 64, 00:07:38.283 "state": "online", 00:07:38.283 "raid_level": "raid0", 00:07:38.283 "superblock": true, 00:07:38.283 "num_base_bdevs": 2, 00:07:38.283 "num_base_bdevs_discovered": 2, 00:07:38.283 "num_base_bdevs_operational": 2, 00:07:38.283 "base_bdevs_list": [ 00:07:38.283 { 00:07:38.283 "name": "pt1", 00:07:38.283 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:38.283 "is_configured": true, 00:07:38.283 "data_offset": 2048, 00:07:38.283 "data_size": 63488 00:07:38.283 }, 00:07:38.283 { 00:07:38.283 "name": 
"pt2", 00:07:38.283 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:38.283 "is_configured": true, 00:07:38.283 "data_offset": 2048, 00:07:38.283 "data_size": 63488 00:07:38.283 } 00:07:38.283 ] 00:07:38.283 } 00:07:38.283 } 00:07:38.283 }' 00:07:38.283 16:04:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:38.283 16:04:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:38.283 pt2' 00:07:38.283 16:04:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:38.542 16:04:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:38.542 16:04:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:38.542 16:04:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:38.542 16:04:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.542 16:04:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:38.542 16:04:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.542 16:04:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.542 16:04:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:38.542 16:04:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:38.542 16:04:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:38.542 16:04:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:38.542 16:04:04 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:38.542 16:04:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.542 16:04:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.542 16:04:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.542 16:04:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:38.542 16:04:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:38.542 16:04:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:38.542 16:04:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:38.542 16:04:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.542 16:04:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.542 [2024-12-12 16:04:04.758917] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:38.542 16:04:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.542 16:04:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 52b33abe-e28e-44d1-ae43-bf2179686f1f '!=' 52b33abe-e28e-44d1-ae43-bf2179686f1f ']' 00:07:38.542 16:04:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:07:38.542 16:04:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:38.542 16:04:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:38.542 16:04:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 63206 00:07:38.542 16:04:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 63206 ']' 00:07:38.542 16:04:04 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@958 -- # kill -0 63206 00:07:38.542 16:04:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:07:38.542 16:04:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:38.542 16:04:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63206 00:07:38.542 16:04:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:38.542 16:04:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:38.542 16:04:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63206' 00:07:38.542 killing process with pid 63206 00:07:38.542 16:04:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 63206 00:07:38.542 [2024-12-12 16:04:04.844824] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:38.542 [2024-12-12 16:04:04.845005] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:38.542 16:04:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 63206 00:07:38.542 [2024-12-12 16:04:04.845100] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:38.542 [2024-12-12 16:04:04.845117] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:38.802 [2024-12-12 16:04:05.083213] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:40.181 16:04:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:40.181 00:07:40.181 real 0m4.692s 00:07:40.181 user 0m6.318s 00:07:40.181 sys 0m0.851s 00:07:40.181 16:04:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:40.181 16:04:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:07:40.181 ************************************ 00:07:40.181 END TEST raid_superblock_test 00:07:40.181 ************************************ 00:07:40.181 16:04:06 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:07:40.181 16:04:06 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:40.181 16:04:06 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:40.181 16:04:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:40.181 ************************************ 00:07:40.181 START TEST raid_read_error_test 00:07:40.181 ************************************ 00:07:40.181 16:04:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 read 00:07:40.181 16:04:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:07:40.181 16:04:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:40.181 16:04:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:40.181 16:04:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:40.181 16:04:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:40.181 16:04:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:40.181 16:04:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:40.181 16:04:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:40.181 16:04:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:40.181 16:04:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:40.181 16:04:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:40.181 16:04:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:40.181 16:04:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:40.181 16:04:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:40.181 16:04:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:40.181 16:04:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:40.181 16:04:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:40.181 16:04:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:40.181 16:04:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:07:40.181 16:04:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:40.181 16:04:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:40.181 16:04:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:40.181 16:04:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.ze8LzTaqIf 00:07:40.181 16:04:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63417 00:07:40.181 16:04:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63417 00:07:40.181 16:04:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:40.181 16:04:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 63417 ']' 00:07:40.181 16:04:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:40.181 16:04:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:40.181 Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock... 00:07:40.182 16:04:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:40.182 16:04:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:40.182 16:04:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.441 [2024-12-12 16:04:06.613742] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:07:40.441 [2024-12-12 16:04:06.613971] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63417 ] 00:07:40.700 [2024-12-12 16:04:06.794271] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.700 [2024-12-12 16:04:06.954703] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.960 [2024-12-12 16:04:07.226303] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:40.960 [2024-12-12 16:04:07.226425] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:41.218 16:04:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:41.218 16:04:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:41.218 16:04:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:41.218 16:04:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:41.218 16:04:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.219 16:04:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.477 BaseBdev1_malloc 
00:07:41.477 16:04:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.478 16:04:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:41.478 16:04:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.478 16:04:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.478 true 00:07:41.478 16:04:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.478 16:04:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:41.478 16:04:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.478 16:04:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.478 [2024-12-12 16:04:07.589403] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:41.478 [2024-12-12 16:04:07.589476] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:41.478 [2024-12-12 16:04:07.589502] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:41.478 [2024-12-12 16:04:07.589516] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:41.478 [2024-12-12 16:04:07.592406] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:41.478 [2024-12-12 16:04:07.592519] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:41.478 BaseBdev1 00:07:41.478 16:04:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.478 16:04:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:41.478 16:04:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2_malloc 00:07:41.478 16:04:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.478 16:04:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.478 BaseBdev2_malloc 00:07:41.478 16:04:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.478 16:04:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:41.478 16:04:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.478 16:04:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.478 true 00:07:41.478 16:04:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.478 16:04:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:41.478 16:04:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.478 16:04:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.478 [2024-12-12 16:04:07.668498] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:41.478 [2024-12-12 16:04:07.668574] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:41.478 [2024-12-12 16:04:07.668597] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:41.478 [2024-12-12 16:04:07.668610] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:41.478 [2024-12-12 16:04:07.671361] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:41.478 [2024-12-12 16:04:07.671408] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:41.478 BaseBdev2 00:07:41.478 16:04:07 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.478 16:04:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:41.478 16:04:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.478 16:04:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.478 [2024-12-12 16:04:07.680555] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:41.478 [2024-12-12 16:04:07.683044] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:41.478 [2024-12-12 16:04:07.683263] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:41.478 [2024-12-12 16:04:07.683284] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:41.478 [2024-12-12 16:04:07.683553] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:41.478 [2024-12-12 16:04:07.683792] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:41.478 [2024-12-12 16:04:07.683809] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:41.478 [2024-12-12 16:04:07.684001] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:41.478 16:04:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.478 16:04:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:41.478 16:04:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:41.478 16:04:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:41.478 16:04:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # 
local raid_level=raid0 00:07:41.478 16:04:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:41.478 16:04:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:41.478 16:04:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:41.478 16:04:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:41.478 16:04:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:41.478 16:04:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:41.478 16:04:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:41.478 16:04:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:41.478 16:04:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.478 16:04:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.478 16:04:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.478 16:04:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:41.478 "name": "raid_bdev1", 00:07:41.478 "uuid": "8cc66dc5-c1e0-45d6-8235-ba20154196fd", 00:07:41.478 "strip_size_kb": 64, 00:07:41.478 "state": "online", 00:07:41.478 "raid_level": "raid0", 00:07:41.478 "superblock": true, 00:07:41.478 "num_base_bdevs": 2, 00:07:41.478 "num_base_bdevs_discovered": 2, 00:07:41.478 "num_base_bdevs_operational": 2, 00:07:41.478 "base_bdevs_list": [ 00:07:41.478 { 00:07:41.478 "name": "BaseBdev1", 00:07:41.478 "uuid": "c23f53c5-885a-50a1-96eb-7983d6d8f636", 00:07:41.478 "is_configured": true, 00:07:41.478 "data_offset": 2048, 00:07:41.478 "data_size": 63488 00:07:41.478 }, 00:07:41.478 { 00:07:41.478 "name": "BaseBdev2", 00:07:41.478 "uuid": 
"6ab59a35-f237-5bfd-9523-65d29dd3d977", 00:07:41.478 "is_configured": true, 00:07:41.478 "data_offset": 2048, 00:07:41.478 "data_size": 63488 00:07:41.478 } 00:07:41.478 ] 00:07:41.478 }' 00:07:41.478 16:04:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:41.478 16:04:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.046 16:04:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:42.046 16:04:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:42.046 [2024-12-12 16:04:08.253453] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:07:43.053 16:04:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:43.053 16:04:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.053 16:04:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.053 16:04:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.053 16:04:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:43.053 16:04:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:43.053 16:04:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:43.053 16:04:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:43.053 16:04:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:43.053 16:04:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:43.053 16:04:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:07:43.053 16:04:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:43.053 16:04:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:43.053 16:04:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:43.053 16:04:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:43.053 16:04:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:43.053 16:04:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:43.053 16:04:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:43.053 16:04:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:43.053 16:04:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.053 16:04:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.053 16:04:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.053 16:04:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:43.053 "name": "raid_bdev1", 00:07:43.053 "uuid": "8cc66dc5-c1e0-45d6-8235-ba20154196fd", 00:07:43.053 "strip_size_kb": 64, 00:07:43.053 "state": "online", 00:07:43.053 "raid_level": "raid0", 00:07:43.053 "superblock": true, 00:07:43.053 "num_base_bdevs": 2, 00:07:43.053 "num_base_bdevs_discovered": 2, 00:07:43.053 "num_base_bdevs_operational": 2, 00:07:43.053 "base_bdevs_list": [ 00:07:43.053 { 00:07:43.053 "name": "BaseBdev1", 00:07:43.053 "uuid": "c23f53c5-885a-50a1-96eb-7983d6d8f636", 00:07:43.053 "is_configured": true, 00:07:43.053 "data_offset": 2048, 00:07:43.053 "data_size": 63488 00:07:43.053 }, 00:07:43.053 { 00:07:43.053 "name": "BaseBdev2", 00:07:43.053 "uuid": 
"6ab59a35-f237-5bfd-9523-65d29dd3d977", 00:07:43.053 "is_configured": true, 00:07:43.053 "data_offset": 2048, 00:07:43.053 "data_size": 63488 00:07:43.053 } 00:07:43.053 ] 00:07:43.053 }' 00:07:43.053 16:04:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:43.053 16:04:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.313 16:04:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:43.313 16:04:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.313 16:04:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.313 [2024-12-12 16:04:09.636115] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:43.313 [2024-12-12 16:04:09.636219] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:43.313 [2024-12-12 16:04:09.639610] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:43.313 { 00:07:43.313 "results": [ 00:07:43.313 { 00:07:43.313 "job": "raid_bdev1", 00:07:43.313 "core_mask": "0x1", 00:07:43.313 "workload": "randrw", 00:07:43.313 "percentage": 50, 00:07:43.313 "status": "finished", 00:07:43.313 "queue_depth": 1, 00:07:43.313 "io_size": 131072, 00:07:43.313 "runtime": 1.382977, 00:07:43.313 "iops": 11814.368568674678, 00:07:43.313 "mibps": 1476.7960710843347, 00:07:43.313 "io_failed": 1, 00:07:43.313 "io_timeout": 0, 00:07:43.313 "avg_latency_us": 117.99540976947293, 00:07:43.313 "min_latency_us": 32.19563318777293, 00:07:43.313 "max_latency_us": 1709.9458515283843 00:07:43.313 } 00:07:43.313 ], 00:07:43.313 "core_count": 1 00:07:43.313 } 00:07:43.313 [2024-12-12 16:04:09.639727] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:43.313 [2024-12-12 16:04:09.639781] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, 
going to free all in destruct 00:07:43.313 [2024-12-12 16:04:09.639797] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:43.313 16:04:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.313 16:04:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63417 00:07:43.313 16:04:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 63417 ']' 00:07:43.313 16:04:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 63417 00:07:43.313 16:04:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:07:43.313 16:04:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:43.313 16:04:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63417 00:07:43.573 16:04:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:43.573 16:04:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:43.573 16:04:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63417' 00:07:43.573 killing process with pid 63417 00:07:43.573 16:04:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 63417 00:07:43.573 16:04:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 63417 00:07:43.573 [2024-12-12 16:04:09.690812] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:43.573 [2024-12-12 16:04:09.870218] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:45.476 16:04:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.ze8LzTaqIf 00:07:45.476 16:04:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:45.476 16:04:11 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:45.476 16:04:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:07:45.476 16:04:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:07:45.476 16:04:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:45.476 16:04:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:45.476 16:04:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:07:45.476 ************************************ 00:07:45.476 END TEST raid_read_error_test 00:07:45.476 ************************************ 00:07:45.476 00:07:45.476 real 0m4.944s 00:07:45.476 user 0m5.813s 00:07:45.476 sys 0m0.687s 00:07:45.476 16:04:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:45.476 16:04:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.476 16:04:11 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:07:45.476 16:04:11 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:45.476 16:04:11 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:45.476 16:04:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:45.476 ************************************ 00:07:45.476 START TEST raid_write_error_test 00:07:45.476 ************************************ 00:07:45.476 16:04:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 write 00:07:45.476 16:04:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:07:45.476 16:04:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:45.476 16:04:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:07:45.476 
16:04:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:45.476 16:04:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:45.476 16:04:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:45.476 16:04:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:45.476 16:04:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:45.476 16:04:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:45.476 16:04:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:45.476 16:04:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:45.476 16:04:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:45.476 16:04:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:45.476 16:04:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:45.476 16:04:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:45.476 16:04:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:45.476 16:04:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:45.476 16:04:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:45.476 16:04:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:07:45.476 16:04:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:45.476 16:04:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:45.476 16:04:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:45.476 16:04:11 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.zZWVNZ9Ecn 00:07:45.476 16:04:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63564 00:07:45.476 16:04:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:45.476 16:04:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63564 00:07:45.476 16:04:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 63564 ']' 00:07:45.476 16:04:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:45.476 16:04:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:45.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:45.476 16:04:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:45.476 16:04:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:45.476 16:04:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.476 [2024-12-12 16:04:11.634505] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:07:45.477 [2024-12-12 16:04:11.634663] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63564 ] 00:07:45.477 [2024-12-12 16:04:11.818780] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.736 [2024-12-12 16:04:11.970878] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.995 [2024-12-12 16:04:12.239436] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:45.995 [2024-12-12 16:04:12.239640] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:46.254 16:04:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:46.254 16:04:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:46.254 16:04:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:46.254 16:04:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:46.254 16:04:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.254 16:04:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.254 BaseBdev1_malloc 00:07:46.254 16:04:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.254 16:04:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:46.254 16:04:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.254 16:04:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.254 true 00:07:46.254 16:04:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:07:46.254 16:04:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:46.254 16:04:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.254 16:04:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.254 [2024-12-12 16:04:12.581035] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:46.254 [2024-12-12 16:04:12.581096] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:46.254 [2024-12-12 16:04:12.581117] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:46.254 [2024-12-12 16:04:12.581129] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:46.254 [2024-12-12 16:04:12.583736] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:46.254 [2024-12-12 16:04:12.583844] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:46.254 BaseBdev1 00:07:46.254 16:04:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.254 16:04:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:46.254 16:04:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:46.254 16:04:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.254 16:04:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.514 BaseBdev2_malloc 00:07:46.514 16:04:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.514 16:04:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:46.514 16:04:12 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.514 16:04:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.514 true 00:07:46.514 16:04:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.514 16:04:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:46.514 16:04:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.514 16:04:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.514 [2024-12-12 16:04:12.646787] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:46.514 [2024-12-12 16:04:12.646849] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:46.514 [2024-12-12 16:04:12.646870] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:46.514 [2024-12-12 16:04:12.646884] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:46.514 [2024-12-12 16:04:12.649703] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:46.514 [2024-12-12 16:04:12.649744] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:46.514 BaseBdev2 00:07:46.514 16:04:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.514 16:04:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:46.514 16:04:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.514 16:04:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.514 [2024-12-12 16:04:12.654828] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:07:46.514 [2024-12-12 16:04:12.657333] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:46.514 [2024-12-12 16:04:12.657566] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:46.514 [2024-12-12 16:04:12.657587] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:46.514 [2024-12-12 16:04:12.657866] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:46.514 [2024-12-12 16:04:12.658116] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:46.514 [2024-12-12 16:04:12.658133] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:46.514 [2024-12-12 16:04:12.658313] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:46.514 16:04:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.514 16:04:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:46.514 16:04:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:46.514 16:04:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:46.514 16:04:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:46.514 16:04:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:46.514 16:04:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:46.514 16:04:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:46.514 16:04:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:46.514 16:04:12 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:46.514 16:04:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:46.514 16:04:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:46.514 16:04:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:46.514 16:04:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.514 16:04:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.514 16:04:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.514 16:04:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:46.514 "name": "raid_bdev1", 00:07:46.514 "uuid": "7ee611d3-0f39-49ec-b2ca-5d5bb0b856db", 00:07:46.514 "strip_size_kb": 64, 00:07:46.514 "state": "online", 00:07:46.514 "raid_level": "raid0", 00:07:46.514 "superblock": true, 00:07:46.514 "num_base_bdevs": 2, 00:07:46.514 "num_base_bdevs_discovered": 2, 00:07:46.514 "num_base_bdevs_operational": 2, 00:07:46.514 "base_bdevs_list": [ 00:07:46.514 { 00:07:46.514 "name": "BaseBdev1", 00:07:46.514 "uuid": "430158e3-c11f-51f2-8783-ca4b86c3694e", 00:07:46.514 "is_configured": true, 00:07:46.514 "data_offset": 2048, 00:07:46.514 "data_size": 63488 00:07:46.514 }, 00:07:46.514 { 00:07:46.514 "name": "BaseBdev2", 00:07:46.514 "uuid": "18cda7fc-5995-5432-a86b-04396000d81d", 00:07:46.514 "is_configured": true, 00:07:46.514 "data_offset": 2048, 00:07:46.514 "data_size": 63488 00:07:46.514 } 00:07:46.514 ] 00:07:46.514 }' 00:07:46.514 16:04:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:46.514 16:04:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.774 16:04:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:46.774 16:04:13 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:47.033 [2024-12-12 16:04:13.183614] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:07:47.972 16:04:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:47.972 16:04:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.972 16:04:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.972 16:04:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.972 16:04:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:47.972 16:04:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:47.972 16:04:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:47.972 16:04:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:47.972 16:04:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:47.973 16:04:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:47.973 16:04:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:47.973 16:04:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:47.973 16:04:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:47.973 16:04:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:47.973 16:04:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:47.973 16:04:14 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:47.973 16:04:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:47.973 16:04:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:47.973 16:04:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.973 16:04:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:47.973 16:04:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.973 16:04:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.973 16:04:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:47.973 "name": "raid_bdev1", 00:07:47.973 "uuid": "7ee611d3-0f39-49ec-b2ca-5d5bb0b856db", 00:07:47.973 "strip_size_kb": 64, 00:07:47.973 "state": "online", 00:07:47.973 "raid_level": "raid0", 00:07:47.973 "superblock": true, 00:07:47.973 "num_base_bdevs": 2, 00:07:47.973 "num_base_bdevs_discovered": 2, 00:07:47.973 "num_base_bdevs_operational": 2, 00:07:47.973 "base_bdevs_list": [ 00:07:47.973 { 00:07:47.973 "name": "BaseBdev1", 00:07:47.973 "uuid": "430158e3-c11f-51f2-8783-ca4b86c3694e", 00:07:47.973 "is_configured": true, 00:07:47.973 "data_offset": 2048, 00:07:47.973 "data_size": 63488 00:07:47.973 }, 00:07:47.973 { 00:07:47.973 "name": "BaseBdev2", 00:07:47.973 "uuid": "18cda7fc-5995-5432-a86b-04396000d81d", 00:07:47.973 "is_configured": true, 00:07:47.973 "data_offset": 2048, 00:07:47.973 "data_size": 63488 00:07:47.973 } 00:07:47.973 ] 00:07:47.973 }' 00:07:47.973 16:04:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:47.973 16:04:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.231 16:04:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- 
# rpc_cmd bdev_raid_delete raid_bdev1 00:07:48.231 16:04:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.231 16:04:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.231 [2024-12-12 16:04:14.518939] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:48.231 [2024-12-12 16:04:14.518991] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:48.231 [2024-12-12 16:04:14.522516] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:48.231 [2024-12-12 16:04:14.522591] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:48.231 [2024-12-12 16:04:14.522638] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:48.231 [2024-12-12 16:04:14.522654] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:48.231 { 00:07:48.231 "results": [ 00:07:48.231 { 00:07:48.231 "job": "raid_bdev1", 00:07:48.231 "core_mask": "0x1", 00:07:48.231 "workload": "randrw", 00:07:48.231 "percentage": 50, 00:07:48.231 "status": "finished", 00:07:48.231 "queue_depth": 1, 00:07:48.231 "io_size": 131072, 00:07:48.231 "runtime": 1.335484, 00:07:48.231 "iops": 10950.337106247622, 00:07:48.231 "mibps": 1368.7921382809527, 00:07:48.231 "io_failed": 1, 00:07:48.231 "io_timeout": 0, 00:07:48.231 "avg_latency_us": 127.83253990221326, 00:07:48.231 "min_latency_us": 28.618340611353712, 00:07:48.231 "max_latency_us": 1738.564192139738 00:07:48.231 } 00:07:48.231 ], 00:07:48.231 "core_count": 1 00:07:48.231 } 00:07:48.231 16:04:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.231 16:04:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63564 00:07:48.231 16:04:14 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@954 -- # '[' -z 63564 ']' 00:07:48.231 16:04:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 63564 00:07:48.231 16:04:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:07:48.231 16:04:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:48.231 16:04:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63564 00:07:48.231 killing process with pid 63564 00:07:48.231 16:04:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:48.231 16:04:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:48.231 16:04:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63564' 00:07:48.231 16:04:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 63564 00:07:48.231 16:04:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 63564 00:07:48.231 [2024-12-12 16:04:14.558536] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:48.488 [2024-12-12 16:04:14.720115] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:49.941 16:04:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.zZWVNZ9Ecn 00:07:49.941 16:04:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:49.941 16:04:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:49.941 16:04:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:07:49.941 16:04:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:07:49.941 16:04:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:49.941 16:04:16 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@200 -- # return 1 00:07:49.942 16:04:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:07:49.942 00:07:49.942 real 0m4.610s 00:07:49.942 user 0m5.388s 00:07:49.942 sys 0m0.650s 00:07:49.942 ************************************ 00:07:49.942 END TEST raid_write_error_test 00:07:49.942 ************************************ 00:07:49.942 16:04:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:49.942 16:04:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.942 16:04:16 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:49.942 16:04:16 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:07:49.942 16:04:16 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:49.942 16:04:16 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:49.942 16:04:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:49.942 ************************************ 00:07:49.942 START TEST raid_state_function_test 00:07:49.942 ************************************ 00:07:49.942 16:04:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 false 00:07:49.942 16:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:07:49.942 16:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:49.942 16:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:49.942 16:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:49.942 16:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:49.942 16:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:07:49.942 16:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:49.942 16:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:49.942 16:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:49.942 16:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:49.942 16:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:49.942 16:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:49.942 16:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:49.942 16:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:49.942 16:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:49.942 16:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:49.942 16:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:49.942 16:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:49.942 16:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:07:49.942 16:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:49.942 16:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:49.942 16:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:49.942 16:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:49.942 16:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=63712 00:07:49.942 16:04:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63712' 00:07:49.942 Process raid pid: 63712 00:07:49.942 16:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:49.942 16:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 63712 00:07:49.942 16:04:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 63712 ']' 00:07:49.942 16:04:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:49.942 16:04:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:49.942 16:04:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:49.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:49.942 16:04:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:49.942 16:04:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.942 [2024-12-12 16:04:16.281818] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:07:49.942 [2024-12-12 16:04:16.282201] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:50.199 [2024-12-12 16:04:16.459969] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.457 [2024-12-12 16:04:16.631303] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.715 [2024-12-12 16:04:16.895994] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:50.715 [2024-12-12 16:04:16.896218] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:50.973 16:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:50.973 16:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:07:50.973 16:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:50.973 16:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.973 16:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.973 [2024-12-12 16:04:17.231909] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:50.973 [2024-12-12 16:04:17.232059] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:50.973 [2024-12-12 16:04:17.232103] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:50.973 [2024-12-12 16:04:17.232122] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:50.973 16:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.973 16:04:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:50.973 16:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:50.973 16:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:50.973 16:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:50.973 16:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:50.973 16:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:50.973 16:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:50.973 16:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:50.973 16:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:50.973 16:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:50.973 16:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:50.973 16:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.973 16:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.973 16:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:50.973 16:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.973 16:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:50.973 "name": "Existed_Raid", 00:07:50.973 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:50.973 "strip_size_kb": 64, 00:07:50.973 "state": "configuring", 00:07:50.973 
"raid_level": "concat", 00:07:50.973 "superblock": false, 00:07:50.973 "num_base_bdevs": 2, 00:07:50.973 "num_base_bdevs_discovered": 0, 00:07:50.973 "num_base_bdevs_operational": 2, 00:07:50.973 "base_bdevs_list": [ 00:07:50.973 { 00:07:50.973 "name": "BaseBdev1", 00:07:50.973 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:50.973 "is_configured": false, 00:07:50.973 "data_offset": 0, 00:07:50.973 "data_size": 0 00:07:50.973 }, 00:07:50.973 { 00:07:50.973 "name": "BaseBdev2", 00:07:50.973 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:50.973 "is_configured": false, 00:07:50.973 "data_offset": 0, 00:07:50.973 "data_size": 0 00:07:50.973 } 00:07:50.973 ] 00:07:50.973 }' 00:07:50.973 16:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:50.973 16:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.540 16:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:51.540 16:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.540 16:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.540 [2024-12-12 16:04:17.631143] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:51.540 [2024-12-12 16:04:17.631206] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:51.540 16:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.540 16:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:51.540 16:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.540 16:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:07:51.540 [2024-12-12 16:04:17.639113] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:51.540 [2024-12-12 16:04:17.639171] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:51.540 [2024-12-12 16:04:17.639182] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:51.540 [2024-12-12 16:04:17.639197] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:51.540 16:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.540 16:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:51.540 16:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.540 16:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.540 [2024-12-12 16:04:17.694377] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:51.540 BaseBdev1 00:07:51.540 16:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.540 16:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:51.540 16:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:51.540 16:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:51.540 16:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:51.540 16:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:51.540 16:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:51.540 16:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:07:51.540 16:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.540 16:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.540 16:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.540 16:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:51.540 16:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.540 16:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.540 [ 00:07:51.540 { 00:07:51.540 "name": "BaseBdev1", 00:07:51.540 "aliases": [ 00:07:51.540 "6ab84994-2eaf-495e-95c3-fc3bfc3b5c8d" 00:07:51.540 ], 00:07:51.540 "product_name": "Malloc disk", 00:07:51.540 "block_size": 512, 00:07:51.540 "num_blocks": 65536, 00:07:51.540 "uuid": "6ab84994-2eaf-495e-95c3-fc3bfc3b5c8d", 00:07:51.540 "assigned_rate_limits": { 00:07:51.540 "rw_ios_per_sec": 0, 00:07:51.540 "rw_mbytes_per_sec": 0, 00:07:51.540 "r_mbytes_per_sec": 0, 00:07:51.540 "w_mbytes_per_sec": 0 00:07:51.540 }, 00:07:51.540 "claimed": true, 00:07:51.540 "claim_type": "exclusive_write", 00:07:51.540 "zoned": false, 00:07:51.540 "supported_io_types": { 00:07:51.540 "read": true, 00:07:51.540 "write": true, 00:07:51.540 "unmap": true, 00:07:51.540 "flush": true, 00:07:51.540 "reset": true, 00:07:51.540 "nvme_admin": false, 00:07:51.540 "nvme_io": false, 00:07:51.540 "nvme_io_md": false, 00:07:51.540 "write_zeroes": true, 00:07:51.540 "zcopy": true, 00:07:51.540 "get_zone_info": false, 00:07:51.540 "zone_management": false, 00:07:51.540 "zone_append": false, 00:07:51.540 "compare": false, 00:07:51.540 "compare_and_write": false, 00:07:51.540 "abort": true, 00:07:51.540 "seek_hole": false, 00:07:51.540 "seek_data": false, 00:07:51.540 "copy": true, 00:07:51.540 "nvme_iov_md": 
false 00:07:51.540 }, 00:07:51.540 "memory_domains": [ 00:07:51.540 { 00:07:51.540 "dma_device_id": "system", 00:07:51.540 "dma_device_type": 1 00:07:51.540 }, 00:07:51.540 { 00:07:51.540 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:51.540 "dma_device_type": 2 00:07:51.540 } 00:07:51.540 ], 00:07:51.540 "driver_specific": {} 00:07:51.540 } 00:07:51.540 ] 00:07:51.540 16:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.540 16:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:51.540 16:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:51.540 16:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:51.540 16:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:51.540 16:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:51.540 16:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:51.540 16:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:51.540 16:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:51.540 16:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:51.540 16:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:51.540 16:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:51.540 16:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:51.540 16:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.540 16:04:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:51.540 16:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.540 16:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.540 16:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:51.540 "name": "Existed_Raid", 00:07:51.540 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:51.540 "strip_size_kb": 64, 00:07:51.540 "state": "configuring", 00:07:51.540 "raid_level": "concat", 00:07:51.540 "superblock": false, 00:07:51.540 "num_base_bdevs": 2, 00:07:51.540 "num_base_bdevs_discovered": 1, 00:07:51.540 "num_base_bdevs_operational": 2, 00:07:51.540 "base_bdevs_list": [ 00:07:51.540 { 00:07:51.540 "name": "BaseBdev1", 00:07:51.540 "uuid": "6ab84994-2eaf-495e-95c3-fc3bfc3b5c8d", 00:07:51.540 "is_configured": true, 00:07:51.540 "data_offset": 0, 00:07:51.540 "data_size": 65536 00:07:51.540 }, 00:07:51.540 { 00:07:51.540 "name": "BaseBdev2", 00:07:51.540 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:51.540 "is_configured": false, 00:07:51.540 "data_offset": 0, 00:07:51.540 "data_size": 0 00:07:51.540 } 00:07:51.540 ] 00:07:51.540 }' 00:07:51.540 16:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:51.540 16:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.799 16:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:51.799 16:04:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.799 16:04:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.799 [2024-12-12 16:04:18.113782] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:51.799 [2024-12-12 16:04:18.113866] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:51.799 16:04:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.799 16:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:51.799 16:04:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.799 16:04:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.799 [2024-12-12 16:04:18.121765] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:51.799 [2024-12-12 16:04:18.123937] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:51.799 [2024-12-12 16:04:18.124017] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:51.799 16:04:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.799 16:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:51.799 16:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:51.799 16:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:51.799 16:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:51.799 16:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:51.799 16:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:51.799 16:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:51.799 16:04:18 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:51.799 16:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:51.799 16:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:51.799 16:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:51.799 16:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:51.800 16:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:51.800 16:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:51.800 16:04:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.800 16:04:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.059 16:04:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.059 16:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:52.059 "name": "Existed_Raid", 00:07:52.059 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:52.059 "strip_size_kb": 64, 00:07:52.059 "state": "configuring", 00:07:52.059 "raid_level": "concat", 00:07:52.059 "superblock": false, 00:07:52.059 "num_base_bdevs": 2, 00:07:52.059 "num_base_bdevs_discovered": 1, 00:07:52.059 "num_base_bdevs_operational": 2, 00:07:52.059 "base_bdevs_list": [ 00:07:52.059 { 00:07:52.059 "name": "BaseBdev1", 00:07:52.059 "uuid": "6ab84994-2eaf-495e-95c3-fc3bfc3b5c8d", 00:07:52.059 "is_configured": true, 00:07:52.059 "data_offset": 0, 00:07:52.059 "data_size": 65536 00:07:52.059 }, 00:07:52.059 { 00:07:52.059 "name": "BaseBdev2", 00:07:52.059 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:52.059 "is_configured": false, 00:07:52.059 "data_offset": 0, 00:07:52.059 "data_size": 0 
00:07:52.059 } 00:07:52.059 ] 00:07:52.059 }' 00:07:52.059 16:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:52.059 16:04:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.318 16:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:52.318 16:04:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.318 16:04:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.318 [2024-12-12 16:04:18.614509] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:52.318 [2024-12-12 16:04:18.614651] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:52.318 [2024-12-12 16:04:18.614664] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:52.318 [2024-12-12 16:04:18.614985] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:52.318 [2024-12-12 16:04:18.615187] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:52.318 [2024-12-12 16:04:18.615202] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:52.319 [2024-12-12 16:04:18.615496] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:52.319 BaseBdev2 00:07:52.319 16:04:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.319 16:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:52.319 16:04:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:52.319 16:04:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:52.319 16:04:18 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:52.319 16:04:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:52.319 16:04:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:52.319 16:04:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:52.319 16:04:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.319 16:04:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.319 16:04:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.319 16:04:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:52.319 16:04:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.319 16:04:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.319 [ 00:07:52.319 { 00:07:52.319 "name": "BaseBdev2", 00:07:52.319 "aliases": [ 00:07:52.319 "25355908-f37d-483f-a8d2-d2f9f7c7b7e6" 00:07:52.319 ], 00:07:52.319 "product_name": "Malloc disk", 00:07:52.319 "block_size": 512, 00:07:52.319 "num_blocks": 65536, 00:07:52.319 "uuid": "25355908-f37d-483f-a8d2-d2f9f7c7b7e6", 00:07:52.319 "assigned_rate_limits": { 00:07:52.319 "rw_ios_per_sec": 0, 00:07:52.319 "rw_mbytes_per_sec": 0, 00:07:52.319 "r_mbytes_per_sec": 0, 00:07:52.319 "w_mbytes_per_sec": 0 00:07:52.319 }, 00:07:52.319 "claimed": true, 00:07:52.319 "claim_type": "exclusive_write", 00:07:52.319 "zoned": false, 00:07:52.319 "supported_io_types": { 00:07:52.319 "read": true, 00:07:52.319 "write": true, 00:07:52.319 "unmap": true, 00:07:52.319 "flush": true, 00:07:52.319 "reset": true, 00:07:52.319 "nvme_admin": false, 00:07:52.319 "nvme_io": false, 00:07:52.319 "nvme_io_md": 
false, 00:07:52.319 "write_zeroes": true, 00:07:52.319 "zcopy": true, 00:07:52.319 "get_zone_info": false, 00:07:52.319 "zone_management": false, 00:07:52.319 "zone_append": false, 00:07:52.319 "compare": false, 00:07:52.319 "compare_and_write": false, 00:07:52.319 "abort": true, 00:07:52.319 "seek_hole": false, 00:07:52.319 "seek_data": false, 00:07:52.319 "copy": true, 00:07:52.319 "nvme_iov_md": false 00:07:52.319 }, 00:07:52.319 "memory_domains": [ 00:07:52.319 { 00:07:52.319 "dma_device_id": "system", 00:07:52.319 "dma_device_type": 1 00:07:52.319 }, 00:07:52.319 { 00:07:52.319 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:52.319 "dma_device_type": 2 00:07:52.319 } 00:07:52.319 ], 00:07:52.319 "driver_specific": {} 00:07:52.319 } 00:07:52.319 ] 00:07:52.319 16:04:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.319 16:04:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:52.319 16:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:52.319 16:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:52.319 16:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:07:52.319 16:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:52.319 16:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:52.319 16:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:52.319 16:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:52.319 16:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:52.319 16:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:07:52.319 16:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:52.319 16:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:52.319 16:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:52.319 16:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:52.319 16:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:52.319 16:04:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.319 16:04:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.653 16:04:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.653 16:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:52.653 "name": "Existed_Raid", 00:07:52.653 "uuid": "d1ab4a88-8ca8-45da-8bcc-db50a57907f3", 00:07:52.653 "strip_size_kb": 64, 00:07:52.653 "state": "online", 00:07:52.653 "raid_level": "concat", 00:07:52.653 "superblock": false, 00:07:52.653 "num_base_bdevs": 2, 00:07:52.653 "num_base_bdevs_discovered": 2, 00:07:52.653 "num_base_bdevs_operational": 2, 00:07:52.653 "base_bdevs_list": [ 00:07:52.653 { 00:07:52.653 "name": "BaseBdev1", 00:07:52.653 "uuid": "6ab84994-2eaf-495e-95c3-fc3bfc3b5c8d", 00:07:52.653 "is_configured": true, 00:07:52.653 "data_offset": 0, 00:07:52.653 "data_size": 65536 00:07:52.653 }, 00:07:52.653 { 00:07:52.653 "name": "BaseBdev2", 00:07:52.653 "uuid": "25355908-f37d-483f-a8d2-d2f9f7c7b7e6", 00:07:52.653 "is_configured": true, 00:07:52.653 "data_offset": 0, 00:07:52.653 "data_size": 65536 00:07:52.653 } 00:07:52.653 ] 00:07:52.653 }' 00:07:52.653 16:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:07:52.653 16:04:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.912 16:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:52.912 16:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:52.912 16:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:52.912 16:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:52.912 16:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:52.912 16:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:52.912 16:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:52.912 16:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:52.912 16:04:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.912 16:04:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.912 [2024-12-12 16:04:19.086090] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:52.912 16:04:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.912 16:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:52.912 "name": "Existed_Raid", 00:07:52.912 "aliases": [ 00:07:52.912 "d1ab4a88-8ca8-45da-8bcc-db50a57907f3" 00:07:52.912 ], 00:07:52.912 "product_name": "Raid Volume", 00:07:52.912 "block_size": 512, 00:07:52.912 "num_blocks": 131072, 00:07:52.912 "uuid": "d1ab4a88-8ca8-45da-8bcc-db50a57907f3", 00:07:52.912 "assigned_rate_limits": { 00:07:52.912 "rw_ios_per_sec": 0, 00:07:52.912 "rw_mbytes_per_sec": 0, 00:07:52.912 "r_mbytes_per_sec": 
0, 00:07:52.912 "w_mbytes_per_sec": 0 00:07:52.912 }, 00:07:52.912 "claimed": false, 00:07:52.912 "zoned": false, 00:07:52.912 "supported_io_types": { 00:07:52.912 "read": true, 00:07:52.912 "write": true, 00:07:52.912 "unmap": true, 00:07:52.912 "flush": true, 00:07:52.912 "reset": true, 00:07:52.912 "nvme_admin": false, 00:07:52.912 "nvme_io": false, 00:07:52.912 "nvme_io_md": false, 00:07:52.912 "write_zeroes": true, 00:07:52.912 "zcopy": false, 00:07:52.912 "get_zone_info": false, 00:07:52.912 "zone_management": false, 00:07:52.912 "zone_append": false, 00:07:52.912 "compare": false, 00:07:52.912 "compare_and_write": false, 00:07:52.912 "abort": false, 00:07:52.912 "seek_hole": false, 00:07:52.912 "seek_data": false, 00:07:52.912 "copy": false, 00:07:52.912 "nvme_iov_md": false 00:07:52.912 }, 00:07:52.912 "memory_domains": [ 00:07:52.912 { 00:07:52.912 "dma_device_id": "system", 00:07:52.912 "dma_device_type": 1 00:07:52.912 }, 00:07:52.912 { 00:07:52.912 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:52.912 "dma_device_type": 2 00:07:52.912 }, 00:07:52.912 { 00:07:52.912 "dma_device_id": "system", 00:07:52.912 "dma_device_type": 1 00:07:52.912 }, 00:07:52.912 { 00:07:52.912 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:52.912 "dma_device_type": 2 00:07:52.912 } 00:07:52.912 ], 00:07:52.912 "driver_specific": { 00:07:52.912 "raid": { 00:07:52.912 "uuid": "d1ab4a88-8ca8-45da-8bcc-db50a57907f3", 00:07:52.912 "strip_size_kb": 64, 00:07:52.912 "state": "online", 00:07:52.912 "raid_level": "concat", 00:07:52.913 "superblock": false, 00:07:52.913 "num_base_bdevs": 2, 00:07:52.913 "num_base_bdevs_discovered": 2, 00:07:52.913 "num_base_bdevs_operational": 2, 00:07:52.913 "base_bdevs_list": [ 00:07:52.913 { 00:07:52.913 "name": "BaseBdev1", 00:07:52.913 "uuid": "6ab84994-2eaf-495e-95c3-fc3bfc3b5c8d", 00:07:52.913 "is_configured": true, 00:07:52.913 "data_offset": 0, 00:07:52.913 "data_size": 65536 00:07:52.913 }, 00:07:52.913 { 00:07:52.913 "name": "BaseBdev2", 
00:07:52.913 "uuid": "25355908-f37d-483f-a8d2-d2f9f7c7b7e6", 00:07:52.913 "is_configured": true, 00:07:52.913 "data_offset": 0, 00:07:52.913 "data_size": 65536 00:07:52.913 } 00:07:52.913 ] 00:07:52.913 } 00:07:52.913 } 00:07:52.913 }' 00:07:52.913 16:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:52.913 16:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:52.913 BaseBdev2' 00:07:52.913 16:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:52.913 16:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:52.913 16:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:52.913 16:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:52.913 16:04:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.913 16:04:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.913 16:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:52.913 16:04:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.172 16:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:53.172 16:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:53.172 16:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:53.172 16:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:07:53.172 16:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:53.172 16:04:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.172 16:04:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.172 16:04:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.172 16:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:53.172 16:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:53.172 16:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:53.172 16:04:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.172 16:04:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.172 [2024-12-12 16:04:19.321430] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:53.172 [2024-12-12 16:04:19.321549] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:53.172 [2024-12-12 16:04:19.321639] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:53.172 16:04:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.172 16:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:53.172 16:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:07:53.172 16:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:53.172 16:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:53.172 16:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:07:53.172 16:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:07:53.172 16:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:53.172 16:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:53.172 16:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:53.172 16:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:53.172 16:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:53.172 16:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:53.172 16:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:53.172 16:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:53.172 16:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:53.172 16:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:53.172 16:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:53.172 16:04:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.172 16:04:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.172 16:04:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.172 16:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:53.172 "name": "Existed_Raid", 00:07:53.172 "uuid": "d1ab4a88-8ca8-45da-8bcc-db50a57907f3", 00:07:53.172 "strip_size_kb": 64, 00:07:53.172 
"state": "offline", 00:07:53.172 "raid_level": "concat", 00:07:53.172 "superblock": false, 00:07:53.172 "num_base_bdevs": 2, 00:07:53.172 "num_base_bdevs_discovered": 1, 00:07:53.172 "num_base_bdevs_operational": 1, 00:07:53.172 "base_bdevs_list": [ 00:07:53.172 { 00:07:53.172 "name": null, 00:07:53.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:53.172 "is_configured": false, 00:07:53.172 "data_offset": 0, 00:07:53.172 "data_size": 65536 00:07:53.172 }, 00:07:53.172 { 00:07:53.172 "name": "BaseBdev2", 00:07:53.172 "uuid": "25355908-f37d-483f-a8d2-d2f9f7c7b7e6", 00:07:53.172 "is_configured": true, 00:07:53.172 "data_offset": 0, 00:07:53.172 "data_size": 65536 00:07:53.172 } 00:07:53.172 ] 00:07:53.172 }' 00:07:53.172 16:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:53.172 16:04:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.741 16:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:53.741 16:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:53.741 16:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:53.741 16:04:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.741 16:04:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.741 16:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:53.741 16:04:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.741 16:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:53.741 16:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:53.741 16:04:19 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:53.741 16:04:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.741 16:04:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.741 [2024-12-12 16:04:19.921603] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:53.741 [2024-12-12 16:04:19.921786] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:53.741 16:04:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.741 16:04:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:53.741 16:04:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:53.741 16:04:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:53.741 16:04:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:53.741 16:04:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.741 16:04:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.741 16:04:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.741 16:04:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:53.741 16:04:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:53.741 16:04:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:53.741 16:04:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 63712 00:07:53.741 16:04:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 63712 ']' 00:07:53.741 16:04:20 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 63712 00:07:53.741 16:04:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:07:53.741 16:04:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:54.001 16:04:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63712 00:07:54.001 killing process with pid 63712 00:07:54.001 16:04:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:54.001 16:04:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:54.001 16:04:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63712' 00:07:54.001 16:04:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 63712 00:07:54.001 [2024-12-12 16:04:20.112658] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:54.001 16:04:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 63712 00:07:54.001 [2024-12-12 16:04:20.131355] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:55.382 16:04:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:55.382 00:07:55.382 real 0m5.210s 00:07:55.382 user 0m7.336s 00:07:55.382 sys 0m0.915s 00:07:55.382 16:04:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:55.382 16:04:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.382 ************************************ 00:07:55.382 END TEST raid_state_function_test 00:07:55.382 ************************************ 00:07:55.382 16:04:21 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:07:55.382 16:04:21 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 
']' 00:07:55.382 16:04:21 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:55.382 16:04:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:55.382 ************************************ 00:07:55.382 START TEST raid_state_function_test_sb 00:07:55.382 ************************************ 00:07:55.382 16:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 true 00:07:55.382 16:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:07:55.382 16:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:55.382 16:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:55.382 16:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:55.382 16:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:55.382 16:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:55.382 16:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:55.382 16:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:55.382 16:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:55.382 16:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:55.382 16:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:55.382 16:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:55.382 16:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:55.382 16:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:07:55.382 16:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:55.382 16:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:55.382 16:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:55.382 16:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:55.382 16:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:07:55.382 16:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:55.382 16:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:55.382 16:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:55.382 16:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:55.382 16:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=63965 00:07:55.382 16:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:55.382 Process raid pid: 63965 00:07:55.382 16:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63965' 00:07:55.382 16:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 63965 00:07:55.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:55.383 16:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 63965 ']' 00:07:55.383 16:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:55.383 16:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:55.383 16:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:55.383 16:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:55.383 16:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:55.383 [2024-12-12 16:04:21.559230] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:07:55.383 [2024-12-12 16:04:21.559363] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:55.642 [2024-12-12 16:04:21.738758] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.642 [2024-12-12 16:04:21.885027] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.901 [2024-12-12 16:04:22.127194] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:55.901 [2024-12-12 16:04:22.127373] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:56.159 16:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:56.159 16:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:07:56.159 16:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 
BaseBdev2'\''' -n Existed_Raid 00:07:56.159 16:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.159 16:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:56.159 [2024-12-12 16:04:22.415601] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:56.159 [2024-12-12 16:04:22.415803] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:56.159 [2024-12-12 16:04:22.415818] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:56.159 [2024-12-12 16:04:22.415830] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:56.159 16:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.159 16:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:56.159 16:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:56.159 16:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:56.159 16:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:56.159 16:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:56.159 16:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:56.159 16:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:56.159 16:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:56.159 16:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:56.159 16:04:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:56.159 16:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:56.159 16:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.159 16:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:56.159 16:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:56.159 16:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.159 16:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:56.159 "name": "Existed_Raid", 00:07:56.159 "uuid": "2f20df7f-1e7a-487f-bae4-f68cf79cb7d4", 00:07:56.159 "strip_size_kb": 64, 00:07:56.159 "state": "configuring", 00:07:56.159 "raid_level": "concat", 00:07:56.159 "superblock": true, 00:07:56.159 "num_base_bdevs": 2, 00:07:56.159 "num_base_bdevs_discovered": 0, 00:07:56.159 "num_base_bdevs_operational": 2, 00:07:56.159 "base_bdevs_list": [ 00:07:56.159 { 00:07:56.159 "name": "BaseBdev1", 00:07:56.159 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:56.159 "is_configured": false, 00:07:56.159 "data_offset": 0, 00:07:56.159 "data_size": 0 00:07:56.159 }, 00:07:56.159 { 00:07:56.159 "name": "BaseBdev2", 00:07:56.159 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:56.159 "is_configured": false, 00:07:56.159 "data_offset": 0, 00:07:56.159 "data_size": 0 00:07:56.159 } 00:07:56.159 ] 00:07:56.159 }' 00:07:56.159 16:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:56.159 16:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:56.724 16:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:56.724 
16:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.724 16:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:56.724 [2024-12-12 16:04:22.847147] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:56.724 [2024-12-12 16:04:22.847302] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:56.724 16:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.724 16:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:56.724 16:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.724 16:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:56.724 [2024-12-12 16:04:22.855126] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:56.724 [2024-12-12 16:04:22.855237] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:56.724 [2024-12-12 16:04:22.855280] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:56.724 [2024-12-12 16:04:22.855328] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:56.724 16:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.724 16:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:56.724 16:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.724 16:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:56.724 
[2024-12-12 16:04:22.909324] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:56.724 BaseBdev1 00:07:56.724 16:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.724 16:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:56.724 16:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:56.724 16:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:56.724 16:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:56.724 16:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:56.724 16:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:56.724 16:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:56.724 16:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.724 16:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:56.724 16:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.724 16:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:56.724 16:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.724 16:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:56.724 [ 00:07:56.724 { 00:07:56.724 "name": "BaseBdev1", 00:07:56.724 "aliases": [ 00:07:56.724 "07e4a981-0cf5-47c9-9f55-b80ecefa3bda" 00:07:56.724 ], 00:07:56.724 "product_name": "Malloc disk", 00:07:56.724 "block_size": 512, 00:07:56.724 
"num_blocks": 65536, 00:07:56.724 "uuid": "07e4a981-0cf5-47c9-9f55-b80ecefa3bda", 00:07:56.724 "assigned_rate_limits": { 00:07:56.724 "rw_ios_per_sec": 0, 00:07:56.724 "rw_mbytes_per_sec": 0, 00:07:56.724 "r_mbytes_per_sec": 0, 00:07:56.724 "w_mbytes_per_sec": 0 00:07:56.724 }, 00:07:56.724 "claimed": true, 00:07:56.724 "claim_type": "exclusive_write", 00:07:56.724 "zoned": false, 00:07:56.724 "supported_io_types": { 00:07:56.724 "read": true, 00:07:56.724 "write": true, 00:07:56.724 "unmap": true, 00:07:56.724 "flush": true, 00:07:56.724 "reset": true, 00:07:56.724 "nvme_admin": false, 00:07:56.724 "nvme_io": false, 00:07:56.724 "nvme_io_md": false, 00:07:56.724 "write_zeroes": true, 00:07:56.724 "zcopy": true, 00:07:56.724 "get_zone_info": false, 00:07:56.724 "zone_management": false, 00:07:56.724 "zone_append": false, 00:07:56.724 "compare": false, 00:07:56.724 "compare_and_write": false, 00:07:56.724 "abort": true, 00:07:56.724 "seek_hole": false, 00:07:56.724 "seek_data": false, 00:07:56.724 "copy": true, 00:07:56.724 "nvme_iov_md": false 00:07:56.724 }, 00:07:56.724 "memory_domains": [ 00:07:56.724 { 00:07:56.724 "dma_device_id": "system", 00:07:56.724 "dma_device_type": 1 00:07:56.724 }, 00:07:56.724 { 00:07:56.724 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:56.724 "dma_device_type": 2 00:07:56.724 } 00:07:56.724 ], 00:07:56.724 "driver_specific": {} 00:07:56.724 } 00:07:56.724 ] 00:07:56.724 16:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.724 16:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:56.724 16:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:56.724 16:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:56.724 16:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:07:56.724 16:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:56.724 16:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:56.724 16:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:56.724 16:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:56.724 16:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:56.724 16:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:56.724 16:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:56.724 16:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:56.724 16:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.724 16:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:56.724 16:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:56.724 16:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.724 16:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:56.724 "name": "Existed_Raid", 00:07:56.724 "uuid": "625e31b8-34ff-40df-be36-58ab9f28f437", 00:07:56.724 "strip_size_kb": 64, 00:07:56.724 "state": "configuring", 00:07:56.724 "raid_level": "concat", 00:07:56.724 "superblock": true, 00:07:56.724 "num_base_bdevs": 2, 00:07:56.724 "num_base_bdevs_discovered": 1, 00:07:56.724 "num_base_bdevs_operational": 2, 00:07:56.724 "base_bdevs_list": [ 00:07:56.724 { 00:07:56.724 "name": "BaseBdev1", 00:07:56.724 "uuid": 
"07e4a981-0cf5-47c9-9f55-b80ecefa3bda", 00:07:56.724 "is_configured": true, 00:07:56.724 "data_offset": 2048, 00:07:56.724 "data_size": 63488 00:07:56.724 }, 00:07:56.724 { 00:07:56.724 "name": "BaseBdev2", 00:07:56.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:56.724 "is_configured": false, 00:07:56.724 "data_offset": 0, 00:07:56.724 "data_size": 0 00:07:56.724 } 00:07:56.724 ] 00:07:56.724 }' 00:07:56.724 16:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:56.724 16:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:56.981 16:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:56.981 16:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.981 16:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:56.981 [2024-12-12 16:04:23.321107] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:56.981 [2024-12-12 16:04:23.321286] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:57.240 16:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.240 16:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:57.240 16:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.240 16:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:57.240 [2024-12-12 16:04:23.343990] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:57.240 [2024-12-12 16:04:23.346793] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev2 00:07:57.240 [2024-12-12 16:04:23.346915] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:57.240 16:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.240 16:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:57.240 16:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:57.240 16:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:57.240 16:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:57.240 16:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:57.240 16:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:57.240 16:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:57.240 16:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:57.240 16:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:57.240 16:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:57.240 16:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:57.240 16:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:57.240 16:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:57.240 16:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:57.240 16:04:23 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.240 16:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:57.240 16:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.240 16:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:57.240 "name": "Existed_Raid", 00:07:57.240 "uuid": "ca974e9a-af72-4a24-b974-e6c42d131685", 00:07:57.240 "strip_size_kb": 64, 00:07:57.240 "state": "configuring", 00:07:57.240 "raid_level": "concat", 00:07:57.240 "superblock": true, 00:07:57.240 "num_base_bdevs": 2, 00:07:57.240 "num_base_bdevs_discovered": 1, 00:07:57.240 "num_base_bdevs_operational": 2, 00:07:57.240 "base_bdevs_list": [ 00:07:57.240 { 00:07:57.240 "name": "BaseBdev1", 00:07:57.240 "uuid": "07e4a981-0cf5-47c9-9f55-b80ecefa3bda", 00:07:57.240 "is_configured": true, 00:07:57.240 "data_offset": 2048, 00:07:57.240 "data_size": 63488 00:07:57.240 }, 00:07:57.240 { 00:07:57.240 "name": "BaseBdev2", 00:07:57.240 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:57.240 "is_configured": false, 00:07:57.240 "data_offset": 0, 00:07:57.240 "data_size": 0 00:07:57.240 } 00:07:57.240 ] 00:07:57.240 }' 00:07:57.240 16:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:57.240 16:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:57.500 16:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:57.500 16:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.500 16:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:57.500 [2024-12-12 16:04:23.788071] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:57.500 [2024-12-12 16:04:23.788386] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:57.500 [2024-12-12 16:04:23.788403] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:57.500 [2024-12-12 16:04:23.788684] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:57.500 BaseBdev2 00:07:57.500 [2024-12-12 16:04:23.788861] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:57.500 [2024-12-12 16:04:23.788881] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:57.500 [2024-12-12 16:04:23.789052] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:57.500 16:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.500 16:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:57.500 16:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:57.500 16:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:57.500 16:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:57.500 16:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:57.500 16:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:57.500 16:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:57.500 16:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.500 16:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:57.500 16:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:07:57.500 16:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:57.500 16:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.500 16:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:57.500 [ 00:07:57.500 { 00:07:57.500 "name": "BaseBdev2", 00:07:57.500 "aliases": [ 00:07:57.500 "83fd910e-88d9-4aac-8486-3511c14b9256" 00:07:57.500 ], 00:07:57.500 "product_name": "Malloc disk", 00:07:57.500 "block_size": 512, 00:07:57.500 "num_blocks": 65536, 00:07:57.500 "uuid": "83fd910e-88d9-4aac-8486-3511c14b9256", 00:07:57.500 "assigned_rate_limits": { 00:07:57.500 "rw_ios_per_sec": 0, 00:07:57.500 "rw_mbytes_per_sec": 0, 00:07:57.500 "r_mbytes_per_sec": 0, 00:07:57.500 "w_mbytes_per_sec": 0 00:07:57.500 }, 00:07:57.500 "claimed": true, 00:07:57.500 "claim_type": "exclusive_write", 00:07:57.500 "zoned": false, 00:07:57.500 "supported_io_types": { 00:07:57.500 "read": true, 00:07:57.500 "write": true, 00:07:57.500 "unmap": true, 00:07:57.500 "flush": true, 00:07:57.500 "reset": true, 00:07:57.500 "nvme_admin": false, 00:07:57.500 "nvme_io": false, 00:07:57.500 "nvme_io_md": false, 00:07:57.500 "write_zeroes": true, 00:07:57.500 "zcopy": true, 00:07:57.500 "get_zone_info": false, 00:07:57.500 "zone_management": false, 00:07:57.500 "zone_append": false, 00:07:57.500 "compare": false, 00:07:57.500 "compare_and_write": false, 00:07:57.500 "abort": true, 00:07:57.500 "seek_hole": false, 00:07:57.500 "seek_data": false, 00:07:57.500 "copy": true, 00:07:57.500 "nvme_iov_md": false 00:07:57.500 }, 00:07:57.500 "memory_domains": [ 00:07:57.500 { 00:07:57.500 "dma_device_id": "system", 00:07:57.500 "dma_device_type": 1 00:07:57.500 }, 00:07:57.500 { 00:07:57.500 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:57.500 "dma_device_type": 2 00:07:57.500 } 00:07:57.500 ], 00:07:57.500 "driver_specific": 
{} 00:07:57.500 } 00:07:57.500 ] 00:07:57.500 16:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.500 16:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:57.500 16:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:57.500 16:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:57.500 16:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:07:57.500 16:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:57.500 16:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:57.500 16:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:57.500 16:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:57.500 16:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:57.500 16:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:57.500 16:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:57.500 16:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:57.500 16:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:57.500 16:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:57.500 16:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:57.500 16:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:07:57.500 16:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:57.500 16:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.760 16:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:57.760 "name": "Existed_Raid", 00:07:57.760 "uuid": "ca974e9a-af72-4a24-b974-e6c42d131685", 00:07:57.760 "strip_size_kb": 64, 00:07:57.760 "state": "online", 00:07:57.760 "raid_level": "concat", 00:07:57.760 "superblock": true, 00:07:57.760 "num_base_bdevs": 2, 00:07:57.760 "num_base_bdevs_discovered": 2, 00:07:57.760 "num_base_bdevs_operational": 2, 00:07:57.760 "base_bdevs_list": [ 00:07:57.760 { 00:07:57.760 "name": "BaseBdev1", 00:07:57.760 "uuid": "07e4a981-0cf5-47c9-9f55-b80ecefa3bda", 00:07:57.760 "is_configured": true, 00:07:57.760 "data_offset": 2048, 00:07:57.760 "data_size": 63488 00:07:57.760 }, 00:07:57.760 { 00:07:57.760 "name": "BaseBdev2", 00:07:57.760 "uuid": "83fd910e-88d9-4aac-8486-3511c14b9256", 00:07:57.760 "is_configured": true, 00:07:57.760 "data_offset": 2048, 00:07:57.760 "data_size": 63488 00:07:57.760 } 00:07:57.760 ] 00:07:57.760 }' 00:07:57.760 16:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:57.760 16:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:58.019 16:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:58.019 16:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:58.019 16:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:58.019 16:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:58.019 16:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # 
local name 00:07:58.019 16:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:58.019 16:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:58.019 16:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:58.019 16:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.019 16:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:58.019 [2024-12-12 16:04:24.279661] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:58.019 16:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.019 16:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:58.019 "name": "Existed_Raid", 00:07:58.019 "aliases": [ 00:07:58.019 "ca974e9a-af72-4a24-b974-e6c42d131685" 00:07:58.019 ], 00:07:58.019 "product_name": "Raid Volume", 00:07:58.019 "block_size": 512, 00:07:58.019 "num_blocks": 126976, 00:07:58.019 "uuid": "ca974e9a-af72-4a24-b974-e6c42d131685", 00:07:58.019 "assigned_rate_limits": { 00:07:58.019 "rw_ios_per_sec": 0, 00:07:58.019 "rw_mbytes_per_sec": 0, 00:07:58.019 "r_mbytes_per_sec": 0, 00:07:58.019 "w_mbytes_per_sec": 0 00:07:58.019 }, 00:07:58.019 "claimed": false, 00:07:58.019 "zoned": false, 00:07:58.019 "supported_io_types": { 00:07:58.019 "read": true, 00:07:58.019 "write": true, 00:07:58.019 "unmap": true, 00:07:58.019 "flush": true, 00:07:58.019 "reset": true, 00:07:58.019 "nvme_admin": false, 00:07:58.019 "nvme_io": false, 00:07:58.019 "nvme_io_md": false, 00:07:58.019 "write_zeroes": true, 00:07:58.019 "zcopy": false, 00:07:58.019 "get_zone_info": false, 00:07:58.019 "zone_management": false, 00:07:58.019 "zone_append": false, 00:07:58.019 "compare": false, 00:07:58.019 "compare_and_write": 
false, 00:07:58.019 "abort": false, 00:07:58.019 "seek_hole": false, 00:07:58.019 "seek_data": false, 00:07:58.019 "copy": false, 00:07:58.019 "nvme_iov_md": false 00:07:58.019 }, 00:07:58.019 "memory_domains": [ 00:07:58.019 { 00:07:58.020 "dma_device_id": "system", 00:07:58.020 "dma_device_type": 1 00:07:58.020 }, 00:07:58.020 { 00:07:58.020 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:58.020 "dma_device_type": 2 00:07:58.020 }, 00:07:58.020 { 00:07:58.020 "dma_device_id": "system", 00:07:58.020 "dma_device_type": 1 00:07:58.020 }, 00:07:58.020 { 00:07:58.020 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:58.020 "dma_device_type": 2 00:07:58.020 } 00:07:58.020 ], 00:07:58.020 "driver_specific": { 00:07:58.020 "raid": { 00:07:58.020 "uuid": "ca974e9a-af72-4a24-b974-e6c42d131685", 00:07:58.020 "strip_size_kb": 64, 00:07:58.020 "state": "online", 00:07:58.020 "raid_level": "concat", 00:07:58.020 "superblock": true, 00:07:58.020 "num_base_bdevs": 2, 00:07:58.020 "num_base_bdevs_discovered": 2, 00:07:58.020 "num_base_bdevs_operational": 2, 00:07:58.020 "base_bdevs_list": [ 00:07:58.020 { 00:07:58.020 "name": "BaseBdev1", 00:07:58.020 "uuid": "07e4a981-0cf5-47c9-9f55-b80ecefa3bda", 00:07:58.020 "is_configured": true, 00:07:58.020 "data_offset": 2048, 00:07:58.020 "data_size": 63488 00:07:58.020 }, 00:07:58.020 { 00:07:58.020 "name": "BaseBdev2", 00:07:58.020 "uuid": "83fd910e-88d9-4aac-8486-3511c14b9256", 00:07:58.020 "is_configured": true, 00:07:58.020 "data_offset": 2048, 00:07:58.020 "data_size": 63488 00:07:58.020 } 00:07:58.020 ] 00:07:58.020 } 00:07:58.020 } 00:07:58.020 }' 00:07:58.020 16:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:58.280 16:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:58.280 BaseBdev2' 00:07:58.280 16:04:24 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:58.280 16:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:58.280 16:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:58.280 16:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:58.280 16:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.280 16:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:58.280 16:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:58.280 16:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.280 16:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:58.280 16:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:58.280 16:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:58.280 16:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:58.280 16:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:58.280 16:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.280 16:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:58.280 16:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.280 16:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 
-- # cmp_base_bdev='512 ' 00:07:58.280 16:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:58.280 16:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:58.280 16:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.280 16:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:58.280 [2024-12-12 16:04:24.550972] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:58.280 [2024-12-12 16:04:24.551073] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:58.280 [2024-12-12 16:04:24.551159] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:58.539 16:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.539 16:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:58.539 16:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:07:58.539 16:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:58.539 16:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:07:58.539 16:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:58.539 16:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:07:58.539 16:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:58.539 16:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:58.539 16:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 
00:07:58.539 16:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:58.539 16:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:58.539 16:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:58.539 16:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:58.539 16:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:58.539 16:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:58.539 16:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:58.539 16:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:58.539 16:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.539 16:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:58.539 16:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.539 16:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:58.539 "name": "Existed_Raid", 00:07:58.539 "uuid": "ca974e9a-af72-4a24-b974-e6c42d131685", 00:07:58.539 "strip_size_kb": 64, 00:07:58.539 "state": "offline", 00:07:58.539 "raid_level": "concat", 00:07:58.539 "superblock": true, 00:07:58.539 "num_base_bdevs": 2, 00:07:58.539 "num_base_bdevs_discovered": 1, 00:07:58.539 "num_base_bdevs_operational": 1, 00:07:58.539 "base_bdevs_list": [ 00:07:58.539 { 00:07:58.539 "name": null, 00:07:58.539 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:58.539 "is_configured": false, 00:07:58.539 "data_offset": 0, 00:07:58.539 "data_size": 63488 00:07:58.539 }, 00:07:58.539 
{ 00:07:58.539 "name": "BaseBdev2", 00:07:58.539 "uuid": "83fd910e-88d9-4aac-8486-3511c14b9256", 00:07:58.539 "is_configured": true, 00:07:58.539 "data_offset": 2048, 00:07:58.539 "data_size": 63488 00:07:58.539 } 00:07:58.539 ] 00:07:58.539 }' 00:07:58.539 16:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:58.539 16:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:58.798 16:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:58.798 16:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:58.798 16:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:58.798 16:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.798 16:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:58.798 16:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:58.798 16:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.057 16:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:59.057 16:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:59.057 16:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:59.057 16:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.057 16:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:59.057 [2024-12-12 16:04:25.153826] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:59.057 [2024-12-12 16:04:25.153980] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:59.057 16:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.057 16:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:59.057 16:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:59.057 16:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:59.057 16:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.057 16:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:59.057 16:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:59.057 16:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.057 16:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:59.057 16:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:59.057 16:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:59.057 16:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 63965 00:07:59.057 16:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 63965 ']' 00:07:59.057 16:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 63965 00:07:59.057 16:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:07:59.057 16:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:59.057 16:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63965 
00:07:59.057 16:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:59.057 16:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:59.057 16:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63965' 00:07:59.057 killing process with pid 63965 00:07:59.057 16:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 63965 00:07:59.057 [2024-12-12 16:04:25.357552] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:59.057 16:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 63965 00:07:59.057 [2024-12-12 16:04:25.376131] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:00.437 16:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:00.437 00:08:00.437 real 0m5.173s 00:08:00.437 user 0m7.260s 00:08:00.437 sys 0m0.879s 00:08:00.437 16:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:00.437 16:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:00.437 ************************************ 00:08:00.437 END TEST raid_state_function_test_sb 00:08:00.437 ************************************ 00:08:00.437 16:04:26 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:08:00.437 16:04:26 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:00.437 16:04:26 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:00.437 16:04:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:00.437 ************************************ 00:08:00.437 START TEST raid_superblock_test 00:08:00.437 ************************************ 00:08:00.437 16:04:26 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@1129 -- # raid_superblock_test concat 2 00:08:00.437 16:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:08:00.437 16:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:08:00.437 16:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:00.437 16:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:00.437 16:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:00.437 16:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:00.437 16:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:00.437 16:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:00.437 16:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:00.437 16:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:00.437 16:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:00.437 16:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:00.437 16:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:00.437 16:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:08:00.437 16:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:08:00.437 16:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:00.437 16:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=64217 00:08:00.437 16:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L 
bdev_raid 00:08:00.437 16:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 64217 00:08:00.437 16:04:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 64217 ']' 00:08:00.437 16:04:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:00.437 16:04:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:00.437 16:04:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:00.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:00.437 16:04:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:00.437 16:04:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.697 [2024-12-12 16:04:26.788430] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:08:00.697 [2024-12-12 16:04:26.788640] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64217 ] 00:08:00.697 [2024-12-12 16:04:26.964750] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.957 [2024-12-12 16:04:27.110381] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.216 [2024-12-12 16:04:27.348538] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:01.216 [2024-12-12 16:04:27.348616] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:01.475 16:04:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:01.475 16:04:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:08:01.475 16:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:01.475 16:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:01.475 16:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:01.475 16:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:01.475 16:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:01.475 16:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:01.475 16:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:01.475 16:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:01.475 16:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:01.475 
16:04:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.475 16:04:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.475 malloc1 00:08:01.475 16:04:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.475 16:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:01.475 16:04:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.475 16:04:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.475 [2024-12-12 16:04:27.715778] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:01.475 [2024-12-12 16:04:27.715856] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:01.475 [2024-12-12 16:04:27.715884] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:01.475 [2024-12-12 16:04:27.715913] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:01.475 [2024-12-12 16:04:27.718857] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:01.475 [2024-12-12 16:04:27.718915] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:01.475 pt1 00:08:01.475 16:04:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.475 16:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:01.475 16:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:01.475 16:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:01.475 16:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:01.475 16:04:27 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:01.475 16:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:01.475 16:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:01.475 16:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:01.475 16:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:01.475 16:04:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.475 16:04:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.475 malloc2 00:08:01.475 16:04:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.475 16:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:01.475 16:04:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.475 16:04:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.475 [2024-12-12 16:04:27.788775] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:01.475 [2024-12-12 16:04:27.788926] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:01.475 [2024-12-12 16:04:27.788992] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:01.475 [2024-12-12 16:04:27.789038] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:01.475 [2024-12-12 16:04:27.791882] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:01.475 [2024-12-12 16:04:27.791981] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:01.475 
pt2 00:08:01.475 16:04:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.475 16:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:01.475 16:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:01.475 16:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:08:01.475 16:04:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.475 16:04:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.475 [2024-12-12 16:04:27.800866] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:01.475 [2024-12-12 16:04:27.803290] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:01.475 [2024-12-12 16:04:27.803526] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:01.475 [2024-12-12 16:04:27.803577] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:01.475 [2024-12-12 16:04:27.803952] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:01.475 [2024-12-12 16:04:27.804196] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:01.475 [2024-12-12 16:04:27.804250] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:08:01.476 [2024-12-12 16:04:27.804492] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:01.476 16:04:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.476 16:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:01.476 16:04:27 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:01.476 16:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:01.476 16:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:01.476 16:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:01.476 16:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:01.476 16:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:01.476 16:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:01.476 16:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:01.476 16:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:01.476 16:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:01.476 16:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:01.476 16:04:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.476 16:04:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.734 16:04:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.734 16:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:01.734 "name": "raid_bdev1", 00:08:01.734 "uuid": "b63b324d-510a-4d97-aad5-c8c4c96ad9d7", 00:08:01.734 "strip_size_kb": 64, 00:08:01.734 "state": "online", 00:08:01.734 "raid_level": "concat", 00:08:01.734 "superblock": true, 00:08:01.734 "num_base_bdevs": 2, 00:08:01.734 "num_base_bdevs_discovered": 2, 00:08:01.735 "num_base_bdevs_operational": 2, 00:08:01.735 "base_bdevs_list": [ 00:08:01.735 { 00:08:01.735 "name": "pt1", 
00:08:01.735 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:01.735 "is_configured": true, 00:08:01.735 "data_offset": 2048, 00:08:01.735 "data_size": 63488 00:08:01.735 }, 00:08:01.735 { 00:08:01.735 "name": "pt2", 00:08:01.735 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:01.735 "is_configured": true, 00:08:01.735 "data_offset": 2048, 00:08:01.735 "data_size": 63488 00:08:01.735 } 00:08:01.735 ] 00:08:01.735 }' 00:08:01.735 16:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:01.735 16:04:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.993 16:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:01.993 16:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:01.993 16:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:01.993 16:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:01.993 16:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:01.993 16:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:01.993 16:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:01.993 16:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:01.993 16:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.993 16:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.993 [2024-12-12 16:04:28.292428] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:01.993 16:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.993 16:04:28 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:01.993 "name": "raid_bdev1", 00:08:01.993 "aliases": [ 00:08:01.993 "b63b324d-510a-4d97-aad5-c8c4c96ad9d7" 00:08:01.993 ], 00:08:01.993 "product_name": "Raid Volume", 00:08:01.993 "block_size": 512, 00:08:01.993 "num_blocks": 126976, 00:08:01.993 "uuid": "b63b324d-510a-4d97-aad5-c8c4c96ad9d7", 00:08:01.993 "assigned_rate_limits": { 00:08:01.993 "rw_ios_per_sec": 0, 00:08:01.993 "rw_mbytes_per_sec": 0, 00:08:01.993 "r_mbytes_per_sec": 0, 00:08:01.993 "w_mbytes_per_sec": 0 00:08:01.993 }, 00:08:01.993 "claimed": false, 00:08:01.993 "zoned": false, 00:08:01.993 "supported_io_types": { 00:08:01.993 "read": true, 00:08:01.993 "write": true, 00:08:01.993 "unmap": true, 00:08:01.993 "flush": true, 00:08:01.993 "reset": true, 00:08:01.993 "nvme_admin": false, 00:08:01.993 "nvme_io": false, 00:08:01.993 "nvme_io_md": false, 00:08:01.993 "write_zeroes": true, 00:08:01.993 "zcopy": false, 00:08:01.993 "get_zone_info": false, 00:08:01.993 "zone_management": false, 00:08:01.993 "zone_append": false, 00:08:01.993 "compare": false, 00:08:01.993 "compare_and_write": false, 00:08:01.993 "abort": false, 00:08:01.993 "seek_hole": false, 00:08:01.993 "seek_data": false, 00:08:01.993 "copy": false, 00:08:01.993 "nvme_iov_md": false 00:08:01.993 }, 00:08:01.993 "memory_domains": [ 00:08:01.993 { 00:08:01.993 "dma_device_id": "system", 00:08:01.993 "dma_device_type": 1 00:08:01.993 }, 00:08:01.993 { 00:08:01.993 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:01.993 "dma_device_type": 2 00:08:01.993 }, 00:08:01.993 { 00:08:01.993 "dma_device_id": "system", 00:08:01.993 "dma_device_type": 1 00:08:01.993 }, 00:08:01.993 { 00:08:01.993 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:01.993 "dma_device_type": 2 00:08:01.993 } 00:08:01.993 ], 00:08:01.993 "driver_specific": { 00:08:01.993 "raid": { 00:08:01.993 "uuid": "b63b324d-510a-4d97-aad5-c8c4c96ad9d7", 00:08:01.993 "strip_size_kb": 64, 00:08:01.993 "state": "online", 00:08:01.993 
"raid_level": "concat", 00:08:01.993 "superblock": true, 00:08:01.993 "num_base_bdevs": 2, 00:08:01.993 "num_base_bdevs_discovered": 2, 00:08:01.993 "num_base_bdevs_operational": 2, 00:08:01.993 "base_bdevs_list": [ 00:08:01.993 { 00:08:01.993 "name": "pt1", 00:08:01.993 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:01.993 "is_configured": true, 00:08:01.993 "data_offset": 2048, 00:08:01.993 "data_size": 63488 00:08:01.993 }, 00:08:01.993 { 00:08:01.993 "name": "pt2", 00:08:01.993 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:01.993 "is_configured": true, 00:08:01.993 "data_offset": 2048, 00:08:01.993 "data_size": 63488 00:08:01.993 } 00:08:01.993 ] 00:08:01.993 } 00:08:01.993 } 00:08:01.993 }' 00:08:01.993 16:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:02.253 16:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:02.253 pt2' 00:08:02.253 16:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:02.253 16:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:02.253 16:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:02.253 16:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:02.254 16:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.254 16:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.254 16:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:02.254 16:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.254 16:04:28 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:02.254 16:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:02.254 16:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:02.254 16:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:02.254 16:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:02.254 16:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.254 16:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.254 16:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.254 16:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:02.254 16:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:02.254 16:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:02.254 16:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.254 16:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.254 16:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:02.254 [2024-12-12 16:04:28.536110] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:02.254 16:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.254 16:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=b63b324d-510a-4d97-aad5-c8c4c96ad9d7 00:08:02.254 16:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
b63b324d-510a-4d97-aad5-c8c4c96ad9d7 ']' 00:08:02.254 16:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:02.254 16:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.254 16:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.254 [2024-12-12 16:04:28.583641] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:02.254 [2024-12-12 16:04:28.583734] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:02.254 [2024-12-12 16:04:28.583861] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:02.254 [2024-12-12 16:04:28.583948] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:02.254 [2024-12-12 16:04:28.583968] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:08:02.254 16:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.254 16:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:02.254 16:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.254 16:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.254 16:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:02.254 16:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.514 16:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:08:02.514 16:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:02.514 16:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:02.514 16:04:28 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:02.514 16:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.514 16:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.514 16:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.514 16:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:02.514 16:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:02.514 16:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.514 16:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.514 16:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.514 16:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:02.514 16:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:02.514 16:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.514 16:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.514 16:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.514 16:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:02.514 16:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:02.514 16:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:08:02.514 16:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:02.514 16:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:02.514 16:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:02.514 16:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:02.514 16:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:02.514 16:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:02.514 16:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.514 16:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.514 [2024-12-12 16:04:28.715457] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:02.514 [2024-12-12 16:04:28.717773] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:02.514 [2024-12-12 16:04:28.717852] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:02.514 [2024-12-12 16:04:28.717924] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:02.514 [2024-12-12 16:04:28.717942] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:02.514 [2024-12-12 16:04:28.717954] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:08:02.514 request: 00:08:02.514 { 00:08:02.514 "name": "raid_bdev1", 00:08:02.514 "raid_level": "concat", 00:08:02.514 "base_bdevs": [ 00:08:02.514 "malloc1", 00:08:02.514 "malloc2" 00:08:02.514 ], 00:08:02.514 "strip_size_kb": 64, 
00:08:02.514 "superblock": false, 00:08:02.514 "method": "bdev_raid_create", 00:08:02.514 "req_id": 1 00:08:02.514 } 00:08:02.514 Got JSON-RPC error response 00:08:02.514 response: 00:08:02.514 { 00:08:02.514 "code": -17, 00:08:02.514 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:02.514 } 00:08:02.514 16:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:02.514 16:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:08:02.514 16:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:02.514 16:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:02.515 16:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:02.515 16:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:02.515 16:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:02.515 16:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.515 16:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.515 16:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.515 16:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:02.515 16:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:02.515 16:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:02.515 16:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.515 16:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.515 [2024-12-12 16:04:28.767335] vbdev_passthru.c: 608:vbdev_passthru_register: 
*NOTICE*: Match on malloc1 00:08:02.515 [2024-12-12 16:04:28.767403] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:02.515 [2024-12-12 16:04:28.767425] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:02.515 [2024-12-12 16:04:28.767437] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:02.515 [2024-12-12 16:04:28.770068] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:02.515 [2024-12-12 16:04:28.770103] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:02.515 [2024-12-12 16:04:28.770200] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:02.515 [2024-12-12 16:04:28.770263] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:02.515 pt1 00:08:02.515 16:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.515 16:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:08:02.515 16:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:02.515 16:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:02.515 16:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:02.515 16:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:02.515 16:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:02.515 16:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:02.515 16:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:02.515 16:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:02.515 16:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:02.515 16:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:02.515 16:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:02.515 16:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.515 16:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.515 16:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.515 16:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:02.515 "name": "raid_bdev1", 00:08:02.515 "uuid": "b63b324d-510a-4d97-aad5-c8c4c96ad9d7", 00:08:02.515 "strip_size_kb": 64, 00:08:02.515 "state": "configuring", 00:08:02.515 "raid_level": "concat", 00:08:02.515 "superblock": true, 00:08:02.515 "num_base_bdevs": 2, 00:08:02.515 "num_base_bdevs_discovered": 1, 00:08:02.515 "num_base_bdevs_operational": 2, 00:08:02.515 "base_bdevs_list": [ 00:08:02.515 { 00:08:02.515 "name": "pt1", 00:08:02.515 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:02.515 "is_configured": true, 00:08:02.515 "data_offset": 2048, 00:08:02.515 "data_size": 63488 00:08:02.515 }, 00:08:02.515 { 00:08:02.515 "name": null, 00:08:02.515 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:02.515 "is_configured": false, 00:08:02.515 "data_offset": 2048, 00:08:02.515 "data_size": 63488 00:08:02.515 } 00:08:02.515 ] 00:08:02.515 }' 00:08:02.515 16:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:02.515 16:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.082 16:04:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:08:03.082 16:04:29 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:03.082 16:04:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:03.082 16:04:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:03.082 16:04:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.082 16:04:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.082 [2024-12-12 16:04:29.226645] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:03.082 [2024-12-12 16:04:29.226757] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:03.082 [2024-12-12 16:04:29.226795] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:08:03.082 [2024-12-12 16:04:29.226813] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:03.082 [2024-12-12 16:04:29.227507] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:03.082 [2024-12-12 16:04:29.227542] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:03.082 [2024-12-12 16:04:29.227676] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:03.082 [2024-12-12 16:04:29.227719] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:03.082 [2024-12-12 16:04:29.227909] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:03.082 [2024-12-12 16:04:29.227927] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:03.082 [2024-12-12 16:04:29.228271] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:03.083 [2024-12-12 16:04:29.228464] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 
00:08:03.083 [2024-12-12 16:04:29.228475] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:03.083 [2024-12-12 16:04:29.228672] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:03.083 pt2 00:08:03.083 16:04:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.083 16:04:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:03.083 16:04:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:03.083 16:04:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:03.083 16:04:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:03.083 16:04:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:03.083 16:04:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:03.083 16:04:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:03.083 16:04:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:03.083 16:04:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:03.083 16:04:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:03.083 16:04:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:03.083 16:04:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:03.083 16:04:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:03.083 16:04:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:03.083 16:04:29 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.083 16:04:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.083 16:04:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.083 16:04:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:03.083 "name": "raid_bdev1", 00:08:03.083 "uuid": "b63b324d-510a-4d97-aad5-c8c4c96ad9d7", 00:08:03.083 "strip_size_kb": 64, 00:08:03.083 "state": "online", 00:08:03.083 "raid_level": "concat", 00:08:03.083 "superblock": true, 00:08:03.083 "num_base_bdevs": 2, 00:08:03.083 "num_base_bdevs_discovered": 2, 00:08:03.083 "num_base_bdevs_operational": 2, 00:08:03.083 "base_bdevs_list": [ 00:08:03.083 { 00:08:03.083 "name": "pt1", 00:08:03.083 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:03.083 "is_configured": true, 00:08:03.083 "data_offset": 2048, 00:08:03.083 "data_size": 63488 00:08:03.083 }, 00:08:03.083 { 00:08:03.083 "name": "pt2", 00:08:03.083 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:03.083 "is_configured": true, 00:08:03.083 "data_offset": 2048, 00:08:03.083 "data_size": 63488 00:08:03.083 } 00:08:03.083 ] 00:08:03.083 }' 00:08:03.083 16:04:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:03.083 16:04:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.341 16:04:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:03.341 16:04:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:03.341 16:04:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:03.341 16:04:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:03.341 16:04:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:03.341 16:04:29 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:03.341 16:04:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:03.341 16:04:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:03.341 16:04:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.341 16:04:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.341 [2024-12-12 16:04:29.622420] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:03.341 16:04:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.341 16:04:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:03.341 "name": "raid_bdev1", 00:08:03.341 "aliases": [ 00:08:03.341 "b63b324d-510a-4d97-aad5-c8c4c96ad9d7" 00:08:03.341 ], 00:08:03.341 "product_name": "Raid Volume", 00:08:03.341 "block_size": 512, 00:08:03.341 "num_blocks": 126976, 00:08:03.341 "uuid": "b63b324d-510a-4d97-aad5-c8c4c96ad9d7", 00:08:03.341 "assigned_rate_limits": { 00:08:03.341 "rw_ios_per_sec": 0, 00:08:03.341 "rw_mbytes_per_sec": 0, 00:08:03.341 "r_mbytes_per_sec": 0, 00:08:03.341 "w_mbytes_per_sec": 0 00:08:03.341 }, 00:08:03.341 "claimed": false, 00:08:03.341 "zoned": false, 00:08:03.341 "supported_io_types": { 00:08:03.341 "read": true, 00:08:03.341 "write": true, 00:08:03.341 "unmap": true, 00:08:03.341 "flush": true, 00:08:03.341 "reset": true, 00:08:03.341 "nvme_admin": false, 00:08:03.341 "nvme_io": false, 00:08:03.341 "nvme_io_md": false, 00:08:03.341 "write_zeroes": true, 00:08:03.341 "zcopy": false, 00:08:03.341 "get_zone_info": false, 00:08:03.341 "zone_management": false, 00:08:03.341 "zone_append": false, 00:08:03.341 "compare": false, 00:08:03.341 "compare_and_write": false, 00:08:03.341 "abort": false, 00:08:03.341 "seek_hole": false, 00:08:03.341 
"seek_data": false, 00:08:03.341 "copy": false, 00:08:03.341 "nvme_iov_md": false 00:08:03.341 }, 00:08:03.341 "memory_domains": [ 00:08:03.341 { 00:08:03.341 "dma_device_id": "system", 00:08:03.341 "dma_device_type": 1 00:08:03.341 }, 00:08:03.341 { 00:08:03.341 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:03.341 "dma_device_type": 2 00:08:03.341 }, 00:08:03.341 { 00:08:03.341 "dma_device_id": "system", 00:08:03.341 "dma_device_type": 1 00:08:03.341 }, 00:08:03.341 { 00:08:03.341 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:03.341 "dma_device_type": 2 00:08:03.341 } 00:08:03.341 ], 00:08:03.341 "driver_specific": { 00:08:03.341 "raid": { 00:08:03.341 "uuid": "b63b324d-510a-4d97-aad5-c8c4c96ad9d7", 00:08:03.341 "strip_size_kb": 64, 00:08:03.341 "state": "online", 00:08:03.341 "raid_level": "concat", 00:08:03.341 "superblock": true, 00:08:03.341 "num_base_bdevs": 2, 00:08:03.341 "num_base_bdevs_discovered": 2, 00:08:03.341 "num_base_bdevs_operational": 2, 00:08:03.341 "base_bdevs_list": [ 00:08:03.341 { 00:08:03.341 "name": "pt1", 00:08:03.341 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:03.341 "is_configured": true, 00:08:03.341 "data_offset": 2048, 00:08:03.341 "data_size": 63488 00:08:03.341 }, 00:08:03.341 { 00:08:03.341 "name": "pt2", 00:08:03.341 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:03.341 "is_configured": true, 00:08:03.341 "data_offset": 2048, 00:08:03.341 "data_size": 63488 00:08:03.341 } 00:08:03.341 ] 00:08:03.341 } 00:08:03.341 } 00:08:03.341 }' 00:08:03.341 16:04:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:03.341 16:04:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:03.341 pt2' 00:08:03.341 16:04:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:03.599 16:04:29 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:03.599 16:04:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:03.599 16:04:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:03.599 16:04:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.599 16:04:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:03.599 16:04:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.599 16:04:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.599 16:04:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:03.599 16:04:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:03.599 16:04:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:03.599 16:04:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:03.599 16:04:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:03.599 16:04:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.599 16:04:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.599 16:04:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.599 16:04:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:03.599 16:04:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:03.599 16:04:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 
00:08:03.599 16:04:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:03.599 16:04:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.599 16:04:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.599 [2024-12-12 16:04:29.822440] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:03.599 16:04:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.599 16:04:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' b63b324d-510a-4d97-aad5-c8c4c96ad9d7 '!=' b63b324d-510a-4d97-aad5-c8c4c96ad9d7 ']' 00:08:03.599 16:04:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:08:03.599 16:04:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:03.599 16:04:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:03.599 16:04:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 64217 00:08:03.599 16:04:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 64217 ']' 00:08:03.599 16:04:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 64217 00:08:03.599 16:04:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:08:03.599 16:04:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:03.599 16:04:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64217 00:08:03.599 killing process with pid 64217 00:08:03.599 16:04:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:03.599 16:04:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:03.599 16:04:29 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 64217' 00:08:03.599 16:04:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 64217 00:08:03.599 16:04:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 64217 00:08:03.599 [2024-12-12 16:04:29.903685] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:03.599 [2024-12-12 16:04:29.903841] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:03.599 [2024-12-12 16:04:29.903985] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:03.599 [2024-12-12 16:04:29.904004] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:03.857 [2024-12-12 16:04:30.160196] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:05.234 ************************************ 00:08:05.234 END TEST raid_superblock_test 00:08:05.234 ************************************ 00:08:05.234 16:04:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:05.234 00:08:05.234 real 0m4.713s 00:08:05.234 user 0m6.428s 00:08:05.234 sys 0m0.843s 00:08:05.234 16:04:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:05.234 16:04:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.234 16:04:31 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:08:05.234 16:04:31 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:05.234 16:04:31 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:05.234 16:04:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:05.234 ************************************ 00:08:05.234 START TEST raid_read_error_test 00:08:05.234 ************************************ 00:08:05.234 16:04:31 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 read 00:08:05.234 16:04:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:08:05.234 16:04:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:05.234 16:04:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:05.234 16:04:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:05.234 16:04:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:05.234 16:04:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:05.234 16:04:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:05.234 16:04:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:05.234 16:04:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:05.234 16:04:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:05.234 16:04:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:05.234 16:04:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:05.234 16:04:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:05.234 16:04:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:05.234 16:04:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:05.234 16:04:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:05.234 16:04:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:05.234 16:04:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:05.234 16:04:31 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:08:05.234 16:04:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:05.234 16:04:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:05.234 16:04:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:05.234 16:04:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.zHJdoGSWA0 00:08:05.234 16:04:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=64423 00:08:05.234 16:04:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:05.234 16:04:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 64423 00:08:05.234 16:04:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 64423 ']' 00:08:05.234 16:04:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:05.234 16:04:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:05.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:05.234 16:04:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:05.234 16:04:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:05.234 16:04:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.494 [2024-12-12 16:04:31.591200] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:08:05.494 [2024-12-12 16:04:31.591345] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64423 ] 00:08:05.494 [2024-12-12 16:04:31.753112] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.753 [2024-12-12 16:04:31.896281] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.012 [2024-12-12 16:04:32.139757] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:06.012 [2024-12-12 16:04:32.139826] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:06.272 16:04:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:06.272 16:04:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:06.272 16:04:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:06.272 16:04:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:06.272 16:04:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.272 16:04:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.272 BaseBdev1_malloc 00:08:06.272 16:04:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.272 16:04:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:06.272 16:04:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.272 16:04:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.272 true 00:08:06.272 16:04:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:08:06.272 16:04:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:06.272 16:04:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.272 16:04:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.272 [2024-12-12 16:04:32.510025] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:06.272 [2024-12-12 16:04:32.510090] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:06.272 [2024-12-12 16:04:32.510111] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:06.272 [2024-12-12 16:04:32.510123] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:06.272 [2024-12-12 16:04:32.512633] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:06.272 [2024-12-12 16:04:32.512676] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:06.272 BaseBdev1 00:08:06.272 16:04:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.272 16:04:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:06.272 16:04:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:06.272 16:04:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.272 16:04:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.272 BaseBdev2_malloc 00:08:06.272 16:04:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.272 16:04:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:06.272 16:04:32 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.272 16:04:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.272 true 00:08:06.272 16:04:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.272 16:04:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:06.272 16:04:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.272 16:04:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.272 [2024-12-12 16:04:32.587657] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:06.272 [2024-12-12 16:04:32.587752] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:06.272 [2024-12-12 16:04:32.587774] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:06.272 [2024-12-12 16:04:32.587788] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:06.272 [2024-12-12 16:04:32.590509] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:06.272 [2024-12-12 16:04:32.590549] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:06.272 BaseBdev2 00:08:06.272 16:04:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.272 16:04:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:06.272 16:04:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.272 16:04:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.272 [2024-12-12 16:04:32.599691] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:08:06.272 [2024-12-12 16:04:32.601866] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:08:06.272 [2024-12-12 16:04:32.602085] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:08:06.272 [2024-12-12 16:04:32.602101] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:08:06.272 [2024-12-12 16:04:32.602378] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220
00:08:06.272 [2024-12-12 16:04:32.602586] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:08:06.272 [2024-12-12 16:04:32.602607] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80
00:08:06.272 [2024-12-12 16:04:32.602781] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:06.272 16:04:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:06.272 16:04:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2
00:08:06.272 16:04:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:08:06.272 16:04:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:08:06.272 16:04:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:08:06.272 16:04:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:06.272 16:04:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:08:06.272 16:04:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:06.272 16:04:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:06.272 16:04:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:06.272 16:04:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:06.272 16:04:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:06.272 16:04:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:08:06.272 16:04:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:06.272 16:04:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:06.531 16:04:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:06.531 16:04:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:06.531 "name": "raid_bdev1",
00:08:06.531 "uuid": "b689d0cd-78ba-4dca-adbf-7887ecee731b",
00:08:06.531 "strip_size_kb": 64,
00:08:06.531 "state": "online",
00:08:06.531 "raid_level": "concat",
00:08:06.531 "superblock": true,
00:08:06.531 "num_base_bdevs": 2,
00:08:06.531 "num_base_bdevs_discovered": 2,
00:08:06.531 "num_base_bdevs_operational": 2,
00:08:06.531 "base_bdevs_list": [
00:08:06.531 {
00:08:06.531 "name": "BaseBdev1",
00:08:06.531 "uuid": "ba959c7f-7282-5fda-b84a-8c93f2b7679d",
00:08:06.531 "is_configured": true,
00:08:06.531 "data_offset": 2048,
00:08:06.531 "data_size": 63488
00:08:06.531 },
00:08:06.531 {
00:08:06.531 "name": "BaseBdev2",
00:08:06.531 "uuid": "3068e693-8248-52a7-9e6a-5ddaf585772f",
00:08:06.531 "is_configured": true,
00:08:06.531 "data_offset": 2048,
00:08:06.531 "data_size": 63488
00:08:06.531 }
00:08:06.531 ]
00:08:06.531 }'
00:08:06.531 16:04:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:06.531 16:04:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:06.791 16:04:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1
00:08:06.791 16:04:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:08:07.050 [2024-12-12 16:04:33.152334] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0
00:08:07.988 16:04:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure
00:08:07.988 16:04:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:07.988 16:04:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:07.988 16:04:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:07.988 16:04:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs
00:08:07.988 16:04:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]]
00:08:07.988 16:04:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2
00:08:07.988 16:04:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2
00:08:07.988 16:04:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:08:07.988 16:04:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:08:07.988 16:04:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:08:07.988 16:04:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:07.988 16:04:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:08:07.988 16:04:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:07.988 16:04:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:07.988 16:04:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:07.988 16:04:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:07.988 16:04:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:07.988 16:04:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:08:07.988 16:04:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:07.988 16:04:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:07.988 16:04:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:07.988 16:04:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:07.988 "name": "raid_bdev1",
00:08:07.988 "uuid": "b689d0cd-78ba-4dca-adbf-7887ecee731b",
00:08:07.988 "strip_size_kb": 64,
00:08:07.988 "state": "online",
00:08:07.988 "raid_level": "concat",
00:08:07.988 "superblock": true,
00:08:07.988 "num_base_bdevs": 2,
00:08:07.988 "num_base_bdevs_discovered": 2,
00:08:07.988 "num_base_bdevs_operational": 2,
00:08:07.988 "base_bdevs_list": [
00:08:07.988 {
00:08:07.988 "name": "BaseBdev1",
00:08:07.988 "uuid": "ba959c7f-7282-5fda-b84a-8c93f2b7679d",
00:08:07.988 "is_configured": true,
00:08:07.988 "data_offset": 2048,
00:08:07.988 "data_size": 63488
00:08:07.988 },
00:08:07.988 {
00:08:07.988 "name": "BaseBdev2",
00:08:07.988 "uuid": "3068e693-8248-52a7-9e6a-5ddaf585772f",
00:08:07.988 "is_configured": true,
00:08:07.988 "data_offset": 2048,
00:08:07.988 "data_size": 63488
00:08:07.988 }
00:08:07.988 ]
00:08:07.988 }'
00:08:07.988 16:04:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:07.988 16:04:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:08.248 16:04:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:08:08.248 16:04:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:08.248 16:04:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:08.248 [2024-12-12 16:04:34.529548] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:08:08.248 [2024-12-12 16:04:34.529602] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:08:08.248 [2024-12-12 16:04:34.532821] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:08:08.248 [2024-12-12 16:04:34.532881] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:08.248 [2024-12-12 16:04:34.532929] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:08:08.248 [2024-12-12 16:04:34.532945] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline
00:08:08.248 {
00:08:08.248 "results": [
00:08:08.248 {
00:08:08.248 "job": "raid_bdev1",
00:08:08.248 "core_mask": "0x1",
00:08:08.248 "workload": "randrw",
00:08:08.248 "percentage": 50,
00:08:08.248 "status": "finished",
00:08:08.248 "queue_depth": 1,
00:08:08.248 "io_size": 131072,
00:08:08.248 "runtime": 1.377798,
00:08:08.248 "iops": 13419.96431987853,
00:08:08.248 "mibps": 1677.4955399848163,
00:08:08.248 "io_failed": 1,
00:08:08.248 "io_timeout": 0,
00:08:08.248 "avg_latency_us": 104.47359057480814,
00:08:08.248 "min_latency_us": 27.72401746724891,
00:08:08.248 "max_latency_us": 1531.0812227074236
00:08:08.248 }
00:08:08.248 ],
00:08:08.248 "core_count": 1
00:08:08.248 }
00:08:08.248 16:04:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:08.248 16:04:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 64423
00:08:08.248 16:04:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 64423 ']'
00:08:08.248 16:04:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 64423
00:08:08.248 16:04:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname
00:08:08.248 16:04:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:08.248 16:04:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64423
00:08:08.248 16:04:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:08:08.248 16:04:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
killing process with pid 64423
16:04:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64423'
16:04:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 64423
00:08:08.248 [2024-12-12 16:04:34.580363] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
16:04:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 64423
00:08:08.508 [2024-12-12 16:04:34.743782] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:08:09.885 16:04:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.zHJdoGSWA0
00:08:09.885 16:04:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1
00:08:09.885 16:04:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}'
00:08:09.885 16:04:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73
00:08:09.885 16:04:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat
00:08:09.885 16:04:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:08:09.885 16:04:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1
00:08:09.885 16:04:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]]
00:08:09.885
00:08:09.885 real 0m4.603s
00:08:09.885 user 0m5.382s
00:08:09.885 sys 0m0.685s
00:08:09.885 16:04:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:09.885 16:04:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:09.885 ************************************
00:08:09.885 END TEST raid_read_error_test
00:08:09.885 ************************************
00:08:09.885 16:04:36 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write
00:08:09.885 16:04:36 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:08:09.885 16:04:36 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:09.885 16:04:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:08:09.885 ************************************
00:08:09.885 START TEST raid_write_error_test
00:08:09.885 ************************************
00:08:09.885 16:04:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 write
00:08:09.885 16:04:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat
00:08:09.885 16:04:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2
00:08:09.885 16:04:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write
00:08:09.885 16:04:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 ))
00:08:09.885 16:04:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:08:09.885 16:04:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1
00:08:09.885 16:04:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:08:09.885 16:04:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:08:09.885 16:04:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2
00:08:09.885 16:04:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:08:09.885 16:04:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:08:09.885 16:04:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:08:09.885 16:04:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs
00:08:09.885 16:04:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1
00:08:09.885 16:04:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size
00:08:09.885 16:04:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg
00:08:09.885 16:04:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log
00:08:09.885 16:04:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s
00:08:09.885 16:04:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']'
00:08:09.885 16:04:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64
00:08:09.885 16:04:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64'
00:08:09.885 16:04:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest
00:08:09.885 16:04:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.sWecux4RL9
00:08:09.885 16:04:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=64569
00:08:09.885 16:04:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid
00:08:09.885 16:04:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 64569
00:08:09.885 16:04:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 64569 ']'
00:08:09.885 16:04:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:09.885 16:04:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:09.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
16:04:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
16:04:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable
16:04:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:10.143 [2024-12-12 16:04:36.242164] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:08:10.143 [2024-12-12 16:04:36.242377] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64569 ]
00:08:10.143 [2024-12-12 16:04:36.419079] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:10.399 [2024-12-12 16:04:36.600501] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:08:10.657 [2024-12-12 16:04:36.866817] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:10.657 [2024-12-12 16:04:36.866938] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:11.222 16:04:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:11.222 16:04:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0
00:08:11.222 16:04:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:08:11.222 16:04:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:08:11.222 16:04:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:11.222 16:04:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:11.222 BaseBdev1_malloc
00:08:11.222 16:04:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:11.222 16:04:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc
00:08:11.222 16:04:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:11.222 16:04:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:11.222 true
00:08:11.222 16:04:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:11.222 16:04:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
00:08:11.222 16:04:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:11.222 16:04:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:11.222 [2024-12-12 16:04:37.335062] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc
00:08:11.222 [2024-12-12 16:04:37.335160] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:11.222 [2024-12-12 16:04:37.335198] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580
00:08:11.222 [2024-12-12 16:04:37.335216] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:11.222 [2024-12-12 16:04:37.338503] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:11.222 [2024-12-12 16:04:37.338564] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:08:11.222 BaseBdev1
00:08:11.222 16:04:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:11.222 16:04:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:08:11.222 16:04:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:08:11.222 16:04:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:11.222 16:04:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:11.222 BaseBdev2_malloc
00:08:11.222 16:04:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:11.222 16:04:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc
00:08:11.222 16:04:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:11.222 16:04:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:11.222 true
00:08:11.222 16:04:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:11.222 16:04:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2
00:08:11.222 16:04:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:11.222 16:04:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:11.222 [2024-12-12 16:04:37.405111] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc
00:08:11.222 [2024-12-12 16:04:37.405214] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:11.222 [2024-12-12 16:04:37.405249] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480
00:08:11.222 [2024-12-12 16:04:37.405266] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:11.222 [2024-12-12 16:04:37.408475] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:11.222 [2024-12-12 16:04:37.408531] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:08:11.222 BaseBdev2
00:08:11.222 16:04:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:11.222 16:04:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s
00:08:11.222 16:04:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:11.222 16:04:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:11.222 [2024-12-12 16:04:37.413417] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:08:11.222 [2024-12-12 16:04:37.416178] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:08:11.222 [2024-12-12 16:04:37.416469] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:08:11.222 [2024-12-12 16:04:37.416496] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:08:11.222 [2024-12-12 16:04:37.416853] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220
00:08:11.222 [2024-12-12 16:04:37.417127] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:08:11.222 [2024-12-12 16:04:37.417152] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80
00:08:11.222 [2024-12-12 16:04:37.417457] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:11.222 16:04:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:11.222 16:04:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2
00:08:11.222 16:04:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:08:11.222 16:04:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:08:11.222 16:04:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:08:11.222 16:04:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:11.222 16:04:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:08:11.222 16:04:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:11.222 16:04:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:11.222 16:04:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:11.222 16:04:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:11.222 16:04:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:11.222 16:04:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:08:11.222 16:04:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:11.222 16:04:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:11.222 16:04:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:11.222 16:04:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:11.222 "name": "raid_bdev1",
00:08:11.222 "uuid": "370af729-2ccb-41ea-b7f3-137f692c812c",
00:08:11.222 "strip_size_kb": 64,
00:08:11.222 "state": "online",
00:08:11.222 "raid_level": "concat",
00:08:11.222 "superblock": true,
00:08:11.222 "num_base_bdevs": 2,
00:08:11.222 "num_base_bdevs_discovered": 2,
00:08:11.222 "num_base_bdevs_operational": 2,
00:08:11.222 "base_bdevs_list": [
00:08:11.222 {
00:08:11.222 "name": "BaseBdev1",
00:08:11.222 "uuid": "3cdbf6bd-6506-5c31-9ce9-59ee39ef6226",
00:08:11.222 "is_configured": true,
00:08:11.222 "data_offset": 2048,
00:08:11.222 "data_size": 63488
00:08:11.222 },
00:08:11.222 {
00:08:11.222 "name": "BaseBdev2",
00:08:11.222 "uuid": "f59c0aa3-627c-511b-be96-b923de1da093",
00:08:11.222 "is_configured": true,
00:08:11.222 "data_offset": 2048,
00:08:11.222 "data_size": 63488
00:08:11.222 }
00:08:11.222 ]
00:08:11.222 }'
00:08:11.222 16:04:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:11.222 16:04:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:11.480 16:04:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1
00:08:11.480 16:04:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:08:11.737 [2024-12-12 16:04:37.886401] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0
00:08:12.670 16:04:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure
00:08:12.670 16:04:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:12.670 16:04:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:12.670 16:04:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:12.670 16:04:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs
00:08:12.670 16:04:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]]
00:08:12.670 16:04:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2
00:08:12.670 16:04:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2
00:08:12.670 16:04:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:08:12.670 16:04:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:08:12.670 16:04:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:08:12.670 16:04:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:12.670 16:04:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:08:12.670 16:04:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:12.670 16:04:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:12.670 16:04:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:12.670 16:04:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:12.670 16:04:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:12.670 16:04:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:08:12.670 16:04:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:12.670 16:04:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:12.670 16:04:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:12.670 16:04:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:12.670 "name": "raid_bdev1",
00:08:12.670 "uuid": "370af729-2ccb-41ea-b7f3-137f692c812c",
00:08:12.670 "strip_size_kb": 64,
00:08:12.670 "state": "online",
00:08:12.670 "raid_level": "concat",
00:08:12.670 "superblock": true,
00:08:12.670 "num_base_bdevs": 2,
00:08:12.670 "num_base_bdevs_discovered": 2,
00:08:12.670 "num_base_bdevs_operational": 2,
00:08:12.670 "base_bdevs_list": [
00:08:12.670 {
00:08:12.670 "name": "BaseBdev1",
00:08:12.670 "uuid": "3cdbf6bd-6506-5c31-9ce9-59ee39ef6226",
00:08:12.670 "is_configured": true,
00:08:12.670 "data_offset": 2048,
00:08:12.670 "data_size": 63488
00:08:12.670 },
00:08:12.670 {
00:08:12.670 "name": "BaseBdev2",
00:08:12.670 "uuid": "f59c0aa3-627c-511b-be96-b923de1da093",
00:08:12.670 "is_configured": true,
00:08:12.670 "data_offset": 2048,
00:08:12.670 "data_size": 63488
00:08:12.670 }
00:08:12.670 ]
00:08:12.670 }'
00:08:12.670 16:04:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:12.670 16:04:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:12.930 16:04:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:08:12.930 16:04:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:12.930 16:04:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:12.930 [2024-12-12 16:04:39.176605] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:08:12.930 [2024-12-12 16:04:39.176666] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:08:12.930 [2024-12-12 16:04:39.179347] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:08:12.930 [2024-12-12 16:04:39.179407] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:12.930 [2024-12-12 16:04:39.179443] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:08:12.930 [2024-12-12 16:04:39.179459] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline
00:08:12.930 16:04:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:12.930 16:04:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 64569
00:08:12.930 {
00:08:12.930 "results": [
00:08:12.930 {
00:08:12.930 "job": "raid_bdev1",
00:08:12.930 "core_mask": "0x1",
00:08:12.930 "workload": "randrw",
00:08:12.930 "percentage": 50,
00:08:12.930 "status": "finished",
00:08:12.930 "queue_depth": 1,
00:08:12.930 "io_size": 131072,
00:08:12.930 "runtime": 1.290135,
00:08:12.930 "iops": 11014.351211307343,
00:08:12.930 "mibps": 1376.7939014134179,
00:08:12.930 "io_failed": 1,
00:08:12.930 "io_timeout": 0,
00:08:12.930 "avg_latency_us": 127.2742631561319,
00:08:12.930 "min_latency_us": 27.72401746724891,
00:08:12.930 "max_latency_us": 1888.810480349345
00:08:12.930 }
00:08:12.930 ],
00:08:12.930 "core_count": 1
00:08:12.930 }
00:08:12.930 16:04:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 64569 ']'
00:08:12.930 16:04:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 64569
00:08:12.930 16:04:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname
00:08:12.930 16:04:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:12.930 16:04:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64569
00:08:12.930 16:04:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:08:12.930 16:04:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
killing process with pid 64569
16:04:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64569'
16:04:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 64569
[2024-12-12 16:04:39.222723] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:08:12.930 16:04:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 64569
00:08:13.189 [2024-12-12 16:04:39.377635] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:08:14.574 16:04:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1
00:08:14.574 16:04:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.sWecux4RL9
00:08:14.574 16:04:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}'
00:08:14.574 16:04:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.78
00:08:14.574 16:04:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat
00:08:14.574 16:04:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:08:14.574 16:04:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1
00:08:14.574 16:04:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.78 != \0\.\0\0 ]]
00:08:14.574
00:08:14.574 real 0m4.565s
00:08:14.574 user 0m5.369s
00:08:14.574 sys 0m0.653s
00:08:14.574 16:04:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:14.574 16:04:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:14.574 ************************************
00:08:14.574 END TEST raid_write_error_test
00:08:14.574 ************************************
00:08:14.574 16:04:40 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1
00:08:14.574 16:04:40 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false
00:08:14.574 16:04:40 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:08:14.574 16:04:40 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:14.574 16:04:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:08:14.574 ************************************
00:08:14.574 START TEST raid_state_function_test
00:08:14.574 ************************************
00:08:14.574 16:04:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 false
00:08:14.574 16:04:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1
00:08:14.574 16:04:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2
00:08:14.574 16:04:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false
00:08:14.574 16:04:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:08:14.574 16:04:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:08:14.574 16:04:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:08:14.574 16:04:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:08:14.574 16:04:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:08:14.574 16:04:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:08:14.574 16:04:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:08:14.574 16:04:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:08:14.574 16:04:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:08:14.574 16:04:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:08:14.574 16:04:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:08:14.574 16:04:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:08:14.574 16:04:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size
00:08:14.574 16:04:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:08:14.574 16:04:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:08:14.574 16:04:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']'
00:08:14.574 16:04:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0
00:08:14.574 16:04:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']'
00:08:14.574 16:04:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg=
00:08:14.574 16:04:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:08:14.574 16:04:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=64718
00:08:14.574 16:04:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64718'
Process raid pid: 64718
00:08:14.574 16:04:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 64718
00:08:14.574 16:04:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 64718 ']'
00:08:14.574 16:04:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:14.574 16:04:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:14.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
16:04:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:14.574 16:04:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:14.574 16:04:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:14.833 [2024-12-12 16:04:40.852178] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:08:14.833 [2024-12-12 16:04:40.852302] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:08:14.833 [2024-12-12 16:04:41.026585] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:14.833 [2024-12-12 16:04:41.164736] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:08:15.093 [2024-12-12 16:04:41.406684] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:15.093 [2024-12-12 16:04:41.406751] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:15.661 16:04:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:15.661 16:04:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0
00:08:15.661 16:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:08:15.661 16:04:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:15.661 16:04:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:15.661 [2024-12-12 16:04:41.714690] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:08:15.661 [2024-12-12 16:04:41.714767] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:08:15.661 [2024-12-12 16:04:41.714795] bdev.c:8697:bdev_open_ext: *NOTICE*:
Currently unable to find bdev with name: BaseBdev2 00:08:15.661 [2024-12-12 16:04:41.714817] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:15.661 16:04:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.661 16:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:15.661 16:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:15.661 16:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:15.661 16:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:15.661 16:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:15.661 16:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:15.661 16:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:15.661 16:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:15.661 16:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:15.661 16:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:15.661 16:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:15.661 16:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.661 16:04:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.661 16:04:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.661 16:04:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:08:15.661 16:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:15.661 "name": "Existed_Raid", 00:08:15.661 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:15.661 "strip_size_kb": 0, 00:08:15.661 "state": "configuring", 00:08:15.661 "raid_level": "raid1", 00:08:15.661 "superblock": false, 00:08:15.661 "num_base_bdevs": 2, 00:08:15.661 "num_base_bdevs_discovered": 0, 00:08:15.661 "num_base_bdevs_operational": 2, 00:08:15.661 "base_bdevs_list": [ 00:08:15.661 { 00:08:15.661 "name": "BaseBdev1", 00:08:15.661 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:15.661 "is_configured": false, 00:08:15.661 "data_offset": 0, 00:08:15.661 "data_size": 0 00:08:15.661 }, 00:08:15.661 { 00:08:15.661 "name": "BaseBdev2", 00:08:15.661 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:15.661 "is_configured": false, 00:08:15.662 "data_offset": 0, 00:08:15.662 "data_size": 0 00:08:15.662 } 00:08:15.662 ] 00:08:15.662 }' 00:08:15.662 16:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:15.662 16:04:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.921 16:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:15.921 16:04:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.921 16:04:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.921 [2024-12-12 16:04:42.157918] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:15.921 [2024-12-12 16:04:42.157975] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:15.921 16:04:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.921 16:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 
-- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:15.921 16:04:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.921 16:04:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.921 [2024-12-12 16:04:42.169828] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:15.921 [2024-12-12 16:04:42.169880] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:15.921 [2024-12-12 16:04:42.169901] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:15.921 [2024-12-12 16:04:42.169915] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:15.921 16:04:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.921 16:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:15.921 16:04:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.921 16:04:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.921 [2024-12-12 16:04:42.224341] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:15.921 BaseBdev1 00:08:15.921 16:04:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.922 16:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:15.922 16:04:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:15.922 16:04:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:15.922 16:04:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:15.922 
16:04:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:15.922 16:04:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:15.922 16:04:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:15.922 16:04:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.922 16:04:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.922 16:04:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.922 16:04:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:15.922 16:04:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.922 16:04:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.922 [ 00:08:15.922 { 00:08:15.922 "name": "BaseBdev1", 00:08:15.922 "aliases": [ 00:08:15.922 "5ef8c91e-5f83-4033-aa65-9b32385d0007" 00:08:15.922 ], 00:08:15.922 "product_name": "Malloc disk", 00:08:15.922 "block_size": 512, 00:08:15.922 "num_blocks": 65536, 00:08:15.922 "uuid": "5ef8c91e-5f83-4033-aa65-9b32385d0007", 00:08:15.922 "assigned_rate_limits": { 00:08:15.922 "rw_ios_per_sec": 0, 00:08:15.922 "rw_mbytes_per_sec": 0, 00:08:15.922 "r_mbytes_per_sec": 0, 00:08:15.922 "w_mbytes_per_sec": 0 00:08:15.922 }, 00:08:15.922 "claimed": true, 00:08:15.922 "claim_type": "exclusive_write", 00:08:15.922 "zoned": false, 00:08:15.922 "supported_io_types": { 00:08:15.922 "read": true, 00:08:15.922 "write": true, 00:08:15.922 "unmap": true, 00:08:15.922 "flush": true, 00:08:15.922 "reset": true, 00:08:15.922 "nvme_admin": false, 00:08:15.922 "nvme_io": false, 00:08:15.922 "nvme_io_md": false, 00:08:15.922 "write_zeroes": true, 00:08:15.922 "zcopy": true, 00:08:15.922 "get_zone_info": 
false, 00:08:15.922 "zone_management": false, 00:08:15.922 "zone_append": false, 00:08:15.922 "compare": false, 00:08:15.922 "compare_and_write": false, 00:08:15.922 "abort": true, 00:08:15.922 "seek_hole": false, 00:08:15.922 "seek_data": false, 00:08:15.922 "copy": true, 00:08:15.922 "nvme_iov_md": false 00:08:15.922 }, 00:08:15.922 "memory_domains": [ 00:08:15.922 { 00:08:15.922 "dma_device_id": "system", 00:08:15.922 "dma_device_type": 1 00:08:15.922 }, 00:08:15.922 { 00:08:15.922 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:15.922 "dma_device_type": 2 00:08:15.922 } 00:08:15.922 ], 00:08:15.922 "driver_specific": {} 00:08:15.922 } 00:08:15.922 ] 00:08:15.922 16:04:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.922 16:04:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:15.922 16:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:15.922 16:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:15.922 16:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:15.922 16:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:15.922 16:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:15.922 16:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:15.922 16:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:15.922 16:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:15.922 16:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:15.922 16:04:42 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:08:15.922 16:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.922 16:04:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.922 16:04:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.922 16:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:16.182 16:04:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.182 16:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:16.182 "name": "Existed_Raid", 00:08:16.182 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:16.182 "strip_size_kb": 0, 00:08:16.182 "state": "configuring", 00:08:16.182 "raid_level": "raid1", 00:08:16.182 "superblock": false, 00:08:16.182 "num_base_bdevs": 2, 00:08:16.182 "num_base_bdevs_discovered": 1, 00:08:16.182 "num_base_bdevs_operational": 2, 00:08:16.182 "base_bdevs_list": [ 00:08:16.182 { 00:08:16.182 "name": "BaseBdev1", 00:08:16.182 "uuid": "5ef8c91e-5f83-4033-aa65-9b32385d0007", 00:08:16.182 "is_configured": true, 00:08:16.182 "data_offset": 0, 00:08:16.182 "data_size": 65536 00:08:16.182 }, 00:08:16.182 { 00:08:16.182 "name": "BaseBdev2", 00:08:16.182 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:16.182 "is_configured": false, 00:08:16.182 "data_offset": 0, 00:08:16.182 "data_size": 0 00:08:16.182 } 00:08:16.182 ] 00:08:16.182 }' 00:08:16.182 16:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:16.182 16:04:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.442 16:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:16.442 16:04:42 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.442 16:04:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.442 [2024-12-12 16:04:42.723578] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:16.442 [2024-12-12 16:04:42.723662] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:16.442 16:04:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.442 16:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:16.442 16:04:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.442 16:04:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.442 [2024-12-12 16:04:42.731588] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:16.442 [2024-12-12 16:04:42.733727] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:16.442 [2024-12-12 16:04:42.733775] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:16.442 16:04:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.442 16:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:16.442 16:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:16.442 16:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:16.442 16:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:16.442 16:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:08:16.442 16:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:16.442 16:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:16.442 16:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:16.442 16:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:16.442 16:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:16.442 16:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:16.442 16:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:16.442 16:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.442 16:04:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.442 16:04:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.442 16:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:16.442 16:04:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.442 16:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:16.442 "name": "Existed_Raid", 00:08:16.442 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:16.442 "strip_size_kb": 0, 00:08:16.442 "state": "configuring", 00:08:16.442 "raid_level": "raid1", 00:08:16.442 "superblock": false, 00:08:16.442 "num_base_bdevs": 2, 00:08:16.442 "num_base_bdevs_discovered": 1, 00:08:16.442 "num_base_bdevs_operational": 2, 00:08:16.442 "base_bdevs_list": [ 00:08:16.442 { 00:08:16.442 "name": "BaseBdev1", 00:08:16.442 "uuid": "5ef8c91e-5f83-4033-aa65-9b32385d0007", 00:08:16.442 
"is_configured": true, 00:08:16.442 "data_offset": 0, 00:08:16.442 "data_size": 65536 00:08:16.442 }, 00:08:16.442 { 00:08:16.442 "name": "BaseBdev2", 00:08:16.442 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:16.442 "is_configured": false, 00:08:16.442 "data_offset": 0, 00:08:16.442 "data_size": 0 00:08:16.442 } 00:08:16.442 ] 00:08:16.442 }' 00:08:16.442 16:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:16.442 16:04:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.012 16:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:17.012 16:04:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.012 16:04:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.012 [2024-12-12 16:04:43.223080] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:17.012 [2024-12-12 16:04:43.223151] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:17.012 [2024-12-12 16:04:43.223159] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:08:17.012 [2024-12-12 16:04:43.223438] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:17.012 [2024-12-12 16:04:43.223663] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:17.012 [2024-12-12 16:04:43.223694] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:17.012 [2024-12-12 16:04:43.224005] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:17.012 BaseBdev2 00:08:17.012 16:04:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.012 16:04:43 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:17.012 16:04:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:17.012 16:04:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:17.012 16:04:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:17.012 16:04:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:17.012 16:04:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:17.012 16:04:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:17.012 16:04:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.012 16:04:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.012 16:04:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.012 16:04:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:17.012 16:04:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.012 16:04:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.012 [ 00:08:17.012 { 00:08:17.012 "name": "BaseBdev2", 00:08:17.012 "aliases": [ 00:08:17.012 "bc845db4-8dc7-4567-9bf6-89e92a8ef93d" 00:08:17.012 ], 00:08:17.012 "product_name": "Malloc disk", 00:08:17.012 "block_size": 512, 00:08:17.012 "num_blocks": 65536, 00:08:17.012 "uuid": "bc845db4-8dc7-4567-9bf6-89e92a8ef93d", 00:08:17.012 "assigned_rate_limits": { 00:08:17.012 "rw_ios_per_sec": 0, 00:08:17.012 "rw_mbytes_per_sec": 0, 00:08:17.012 "r_mbytes_per_sec": 0, 00:08:17.012 "w_mbytes_per_sec": 0 00:08:17.012 }, 00:08:17.012 "claimed": true, 00:08:17.012 "claim_type": 
"exclusive_write", 00:08:17.012 "zoned": false, 00:08:17.012 "supported_io_types": { 00:08:17.012 "read": true, 00:08:17.012 "write": true, 00:08:17.012 "unmap": true, 00:08:17.012 "flush": true, 00:08:17.012 "reset": true, 00:08:17.012 "nvme_admin": false, 00:08:17.012 "nvme_io": false, 00:08:17.012 "nvme_io_md": false, 00:08:17.012 "write_zeroes": true, 00:08:17.012 "zcopy": true, 00:08:17.012 "get_zone_info": false, 00:08:17.012 "zone_management": false, 00:08:17.012 "zone_append": false, 00:08:17.012 "compare": false, 00:08:17.012 "compare_and_write": false, 00:08:17.012 "abort": true, 00:08:17.012 "seek_hole": false, 00:08:17.012 "seek_data": false, 00:08:17.012 "copy": true, 00:08:17.012 "nvme_iov_md": false 00:08:17.012 }, 00:08:17.012 "memory_domains": [ 00:08:17.012 { 00:08:17.012 "dma_device_id": "system", 00:08:17.012 "dma_device_type": 1 00:08:17.012 }, 00:08:17.012 { 00:08:17.012 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:17.012 "dma_device_type": 2 00:08:17.012 } 00:08:17.012 ], 00:08:17.012 "driver_specific": {} 00:08:17.012 } 00:08:17.012 ] 00:08:17.012 16:04:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.013 16:04:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:17.013 16:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:17.013 16:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:17.013 16:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:17.013 16:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:17.013 16:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:17.013 16:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:17.013 
16:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:17.013 16:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:17.013 16:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:17.013 16:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:17.013 16:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:17.013 16:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:17.013 16:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:17.013 16:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:17.013 16:04:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.013 16:04:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.013 16:04:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.013 16:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:17.013 "name": "Existed_Raid", 00:08:17.013 "uuid": "50b859c6-6d8e-47c4-9c13-27078693b842", 00:08:17.013 "strip_size_kb": 0, 00:08:17.013 "state": "online", 00:08:17.013 "raid_level": "raid1", 00:08:17.013 "superblock": false, 00:08:17.013 "num_base_bdevs": 2, 00:08:17.013 "num_base_bdevs_discovered": 2, 00:08:17.013 "num_base_bdevs_operational": 2, 00:08:17.013 "base_bdevs_list": [ 00:08:17.013 { 00:08:17.013 "name": "BaseBdev1", 00:08:17.013 "uuid": "5ef8c91e-5f83-4033-aa65-9b32385d0007", 00:08:17.013 "is_configured": true, 00:08:17.013 "data_offset": 0, 00:08:17.013 "data_size": 65536 00:08:17.013 }, 00:08:17.013 { 00:08:17.013 "name": "BaseBdev2", 
00:08:17.013 "uuid": "bc845db4-8dc7-4567-9bf6-89e92a8ef93d", 00:08:17.013 "is_configured": true, 00:08:17.013 "data_offset": 0, 00:08:17.013 "data_size": 65536 00:08:17.013 } 00:08:17.013 ] 00:08:17.013 }' 00:08:17.013 16:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:17.013 16:04:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.583 16:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:17.583 16:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:17.583 16:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:17.583 16:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:17.583 16:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:17.583 16:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:17.583 16:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:17.583 16:04:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.583 16:04:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.583 16:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:17.583 [2024-12-12 16:04:43.710627] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:17.583 16:04:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.583 16:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:17.583 "name": "Existed_Raid", 00:08:17.583 "aliases": [ 00:08:17.583 "50b859c6-6d8e-47c4-9c13-27078693b842" 00:08:17.583 ], 
00:08:17.583 "product_name": "Raid Volume", 00:08:17.583 "block_size": 512, 00:08:17.583 "num_blocks": 65536, 00:08:17.583 "uuid": "50b859c6-6d8e-47c4-9c13-27078693b842", 00:08:17.583 "assigned_rate_limits": { 00:08:17.583 "rw_ios_per_sec": 0, 00:08:17.583 "rw_mbytes_per_sec": 0, 00:08:17.583 "r_mbytes_per_sec": 0, 00:08:17.583 "w_mbytes_per_sec": 0 00:08:17.583 }, 00:08:17.583 "claimed": false, 00:08:17.583 "zoned": false, 00:08:17.583 "supported_io_types": { 00:08:17.583 "read": true, 00:08:17.583 "write": true, 00:08:17.583 "unmap": false, 00:08:17.583 "flush": false, 00:08:17.583 "reset": true, 00:08:17.583 "nvme_admin": false, 00:08:17.583 "nvme_io": false, 00:08:17.583 "nvme_io_md": false, 00:08:17.583 "write_zeroes": true, 00:08:17.583 "zcopy": false, 00:08:17.583 "get_zone_info": false, 00:08:17.583 "zone_management": false, 00:08:17.583 "zone_append": false, 00:08:17.583 "compare": false, 00:08:17.583 "compare_and_write": false, 00:08:17.583 "abort": false, 00:08:17.583 "seek_hole": false, 00:08:17.583 "seek_data": false, 00:08:17.583 "copy": false, 00:08:17.583 "nvme_iov_md": false 00:08:17.583 }, 00:08:17.583 "memory_domains": [ 00:08:17.583 { 00:08:17.583 "dma_device_id": "system", 00:08:17.583 "dma_device_type": 1 00:08:17.583 }, 00:08:17.583 { 00:08:17.583 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:17.583 "dma_device_type": 2 00:08:17.583 }, 00:08:17.583 { 00:08:17.583 "dma_device_id": "system", 00:08:17.583 "dma_device_type": 1 00:08:17.583 }, 00:08:17.583 { 00:08:17.583 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:17.583 "dma_device_type": 2 00:08:17.583 } 00:08:17.583 ], 00:08:17.583 "driver_specific": { 00:08:17.583 "raid": { 00:08:17.583 "uuid": "50b859c6-6d8e-47c4-9c13-27078693b842", 00:08:17.583 "strip_size_kb": 0, 00:08:17.583 "state": "online", 00:08:17.583 "raid_level": "raid1", 00:08:17.583 "superblock": false, 00:08:17.583 "num_base_bdevs": 2, 00:08:17.583 "num_base_bdevs_discovered": 2, 00:08:17.583 "num_base_bdevs_operational": 
2, 00:08:17.583 "base_bdevs_list": [ 00:08:17.583 { 00:08:17.583 "name": "BaseBdev1", 00:08:17.583 "uuid": "5ef8c91e-5f83-4033-aa65-9b32385d0007", 00:08:17.583 "is_configured": true, 00:08:17.583 "data_offset": 0, 00:08:17.583 "data_size": 65536 00:08:17.583 }, 00:08:17.583 { 00:08:17.583 "name": "BaseBdev2", 00:08:17.583 "uuid": "bc845db4-8dc7-4567-9bf6-89e92a8ef93d", 00:08:17.583 "is_configured": true, 00:08:17.583 "data_offset": 0, 00:08:17.583 "data_size": 65536 00:08:17.583 } 00:08:17.583 ] 00:08:17.583 } 00:08:17.583 } 00:08:17.583 }' 00:08:17.583 16:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:17.583 16:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:17.583 BaseBdev2' 00:08:17.583 16:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:17.583 16:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:17.583 16:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:17.583 16:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:17.583 16:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:17.583 16:04:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.583 16:04:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.583 16:04:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.583 16:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:17.583 16:04:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:17.583 16:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:17.583 16:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:17.583 16:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:17.583 16:04:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.583 16:04:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.583 16:04:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.843 16:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:17.843 16:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:17.843 16:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:17.843 16:04:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.843 16:04:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.843 [2024-12-12 16:04:43.946016] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:17.843 16:04:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.843 16:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:17.843 16:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:08:17.843 16:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:17.843 16:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 
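The trace above runs `jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'` against the raid bdev dump to collect the configured base bdev names (`BaseBdev1 BaseBdev2`). The Python sketch below mirrors that jq filter on a reduced reconstruction of the JSON shown in the log; the sample object is illustrative (fields trimmed, uuids omitted), not the exact RPC payload.

```python
import json

# Illustrative subset of the bdev_raid_get_bdevs dump seen in the log above.
raid_bdev_info = json.loads("""
{
  "driver_specific": {
    "raid": {
      "raid_level": "raid1",
      "base_bdevs_list": [
        {"name": "BaseBdev1", "is_configured": true},
        {"name": "BaseBdev2", "is_configured": true}
      ]
    }
  }
}
""")

# Equivalent of:
#   jq -r '.driver_specific.raid.base_bdevs_list[]
#          | select(.is_configured == true).name'
configured_names = [
    b["name"]
    for b in raid_bdev_info["driver_specific"]["raid"]["base_bdevs_list"]
    if b["is_configured"]
]
print(configured_names)  # ['BaseBdev1', 'BaseBdev2']
```

The test script then loops over these names and compares each base bdev's `block_size`/metadata fields against the raid bdev's, which is the `cmp_raid_bdev`/`cmp_base_bdev` check visible in the trace.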
00:08:17.843 16:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:08:17.843 16:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:08:17.843 16:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:17.843 16:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:17.843 16:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:17.843 16:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:17.843 16:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:17.843 16:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:17.843 16:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:17.843 16:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:17.843 16:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:17.843 16:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:17.843 16:04:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.843 16:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:17.843 16:04:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.843 16:04:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.843 16:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:17.843 "name": "Existed_Raid", 00:08:17.843 "uuid": 
"50b859c6-6d8e-47c4-9c13-27078693b842", 00:08:17.843 "strip_size_kb": 0, 00:08:17.843 "state": "online", 00:08:17.843 "raid_level": "raid1", 00:08:17.843 "superblock": false, 00:08:17.843 "num_base_bdevs": 2, 00:08:17.843 "num_base_bdevs_discovered": 1, 00:08:17.843 "num_base_bdevs_operational": 1, 00:08:17.843 "base_bdevs_list": [ 00:08:17.843 { 00:08:17.843 "name": null, 00:08:17.843 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:17.843 "is_configured": false, 00:08:17.843 "data_offset": 0, 00:08:17.843 "data_size": 65536 00:08:17.843 }, 00:08:17.843 { 00:08:17.843 "name": "BaseBdev2", 00:08:17.843 "uuid": "bc845db4-8dc7-4567-9bf6-89e92a8ef93d", 00:08:17.843 "is_configured": true, 00:08:17.843 "data_offset": 0, 00:08:17.843 "data_size": 65536 00:08:17.843 } 00:08:17.843 ] 00:08:17.843 }' 00:08:17.843 16:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:17.843 16:04:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.413 16:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:18.413 16:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:18.413 16:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:18.413 16:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:18.413 16:04:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.413 16:04:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.413 16:04:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.413 16:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:18.413 16:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid 
']' 00:08:18.413 16:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:18.413 16:04:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.413 16:04:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.413 [2024-12-12 16:04:44.598288] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:18.413 [2024-12-12 16:04:44.598430] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:18.413 [2024-12-12 16:04:44.705540] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:18.413 [2024-12-12 16:04:44.705609] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:18.413 [2024-12-12 16:04:44.705624] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:18.413 16:04:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.413 16:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:18.413 16:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:18.413 16:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:18.413 16:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:18.413 16:04:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.413 16:04:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.413 16:04:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.413 16:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:18.413 
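After `bdev_malloc_delete BaseBdev1`, the dump above shows the raid1 array still `online` with `num_base_bdevs_discovered: 1` and a null entry in `base_bdevs_list` — raid1 has redundancy, so losing one of two base bdevs degrades rather than stops the array (the `has_redundancy raid1` / `expected_state=online` branch in the trace). A minimal sketch of that check, using an illustrative reconstruction of the degraded dump (fields trimmed from the log's JSON):

```python
import json

# Illustrative subset of the "Existed_Raid" dump after BaseBdev1 was deleted.
info = json.loads("""
{
  "name": "Existed_Raid",
  "raid_level": "raid1",
  "num_base_bdevs": 2,
  "base_bdevs_list": [
    {"name": null, "is_configured": false},
    {"name": "BaseBdev2", "is_configured": true}
  ]
}
""")

# Mirrors num_base_bdevs_discovered: count of still-configured base bdevs.
discovered = sum(1 for b in info["base_bdevs_list"] if b["is_configured"])

# Mirrors the expected-state logic: a redundant level (raid1) stays online
# while at least one base bdev remains; a non-redundant level would not.
has_redundancy = info["raid_level"] == "raid1"
expected_state = "online" if has_redundancy and discovered >= 1 else "offline"
print(discovered, expected_state)  # 1 online
```

When the second base bdev is removed as well (the `bdev_malloc_delete BaseBdev2` step that follows), the log shows the state transition `online to offline` and the raid bdev being cleaned up, matching the `discovered == 0` case here.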
16:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:18.413 16:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:18.413 16:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 64718 00:08:18.413 16:04:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 64718 ']' 00:08:18.413 16:04:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 64718 00:08:18.672 16:04:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:08:18.672 16:04:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:18.672 16:04:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64718 00:08:18.672 16:04:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:18.672 16:04:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:18.672 16:04:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64718' 00:08:18.672 killing process with pid 64718 00:08:18.672 16:04:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 64718 00:08:18.672 [2024-12-12 16:04:44.805280] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:18.672 16:04:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 64718 00:08:18.672 [2024-12-12 16:04:44.822933] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:20.053 16:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:20.053 00:08:20.053 real 0m5.355s 00:08:20.053 user 0m7.600s 00:08:20.053 sys 0m0.908s 00:08:20.053 16:04:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:08:20.053 16:04:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.053 ************************************ 00:08:20.053 END TEST raid_state_function_test 00:08:20.053 ************************************ 00:08:20.053 16:04:46 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true 00:08:20.053 16:04:46 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:20.053 16:04:46 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:20.053 16:04:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:20.053 ************************************ 00:08:20.053 START TEST raid_state_function_test_sb 00:08:20.053 ************************************ 00:08:20.053 16:04:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:08:20.053 16:04:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:08:20.053 16:04:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:20.053 16:04:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:20.053 16:04:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:20.053 16:04:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:20.053 16:04:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:20.053 16:04:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:20.053 16:04:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:20.053 16:04:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:20.053 16:04:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # 
echo BaseBdev2 00:08:20.053 16:04:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:20.053 16:04:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:20.053 16:04:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:20.053 16:04:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:20.053 16:04:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:20.053 16:04:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:20.053 16:04:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:20.053 16:04:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:20.053 16:04:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:08:20.053 16:04:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:08:20.053 16:04:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:20.053 16:04:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:20.053 16:04:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=64971 00:08:20.054 16:04:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:20.054 16:04:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64971' 00:08:20.054 Process raid pid: 64971 00:08:20.054 16:04:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 64971 00:08:20.054 16:04:46 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@835 -- # '[' -z 64971 ']' 00:08:20.054 16:04:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:20.054 16:04:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:20.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:20.054 16:04:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:20.054 16:04:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:20.054 16:04:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.054 [2024-12-12 16:04:46.275535] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:08:20.054 [2024-12-12 16:04:46.275689] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:20.312 [2024-12-12 16:04:46.458848] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.312 [2024-12-12 16:04:46.621427] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.570 [2024-12-12 16:04:46.908931] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:20.570 [2024-12-12 16:04:46.908985] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:20.829 16:04:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:20.829 16:04:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:08:20.829 16:04:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create 
-s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:20.829 16:04:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.829 16:04:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.087 [2024-12-12 16:04:47.184283] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:21.087 [2024-12-12 16:04:47.184362] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:21.087 [2024-12-12 16:04:47.184377] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:21.087 [2024-12-12 16:04:47.184390] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:21.087 16:04:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.087 16:04:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:21.087 16:04:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:21.087 16:04:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:21.087 16:04:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:21.087 16:04:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:21.087 16:04:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:21.087 16:04:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:21.087 16:04:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:21.087 16:04:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:08:21.087 16:04:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:21.087 16:04:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:21.087 16:04:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:21.087 16:04:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.087 16:04:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.087 16:04:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.087 16:04:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:21.087 "name": "Existed_Raid", 00:08:21.087 "uuid": "aaee2126-ad88-4894-bfc5-4b5d6f64dd70", 00:08:21.087 "strip_size_kb": 0, 00:08:21.087 "state": "configuring", 00:08:21.087 "raid_level": "raid1", 00:08:21.087 "superblock": true, 00:08:21.087 "num_base_bdevs": 2, 00:08:21.087 "num_base_bdevs_discovered": 0, 00:08:21.087 "num_base_bdevs_operational": 2, 00:08:21.087 "base_bdevs_list": [ 00:08:21.087 { 00:08:21.087 "name": "BaseBdev1", 00:08:21.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:21.087 "is_configured": false, 00:08:21.087 "data_offset": 0, 00:08:21.087 "data_size": 0 00:08:21.087 }, 00:08:21.087 { 00:08:21.087 "name": "BaseBdev2", 00:08:21.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:21.087 "is_configured": false, 00:08:21.087 "data_offset": 0, 00:08:21.087 "data_size": 0 00:08:21.087 } 00:08:21.087 ] 00:08:21.087 }' 00:08:21.087 16:04:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:21.087 16:04:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.346 16:04:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:08:21.346 16:04:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.346 16:04:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.346 [2024-12-12 16:04:47.664131] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:21.346 [2024-12-12 16:04:47.664192] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:21.346 16:04:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.346 16:04:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:21.346 16:04:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.346 16:04:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.346 [2024-12-12 16:04:47.672096] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:21.346 [2024-12-12 16:04:47.672147] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:21.346 [2024-12-12 16:04:47.672159] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:21.346 [2024-12-12 16:04:47.672175] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:21.346 16:04:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.346 16:04:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:21.346 16:04:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.346 16:04:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:08:21.604 [2024-12-12 16:04:47.730799] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:21.604 BaseBdev1 00:08:21.604 16:04:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.604 16:04:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:21.604 16:04:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:21.604 16:04:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:21.604 16:04:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:21.604 16:04:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:21.604 16:04:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:21.604 16:04:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:21.604 16:04:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.604 16:04:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.604 16:04:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.604 16:04:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:21.604 16:04:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.604 16:04:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.604 [ 00:08:21.604 { 00:08:21.604 "name": "BaseBdev1", 00:08:21.604 "aliases": [ 00:08:21.604 "ffa351ce-6809-4314-9b90-ccac35bfb646" 00:08:21.604 ], 00:08:21.604 "product_name": "Malloc disk", 00:08:21.604 "block_size": 512, 
00:08:21.604 "num_blocks": 65536, 00:08:21.604 "uuid": "ffa351ce-6809-4314-9b90-ccac35bfb646", 00:08:21.604 "assigned_rate_limits": { 00:08:21.604 "rw_ios_per_sec": 0, 00:08:21.604 "rw_mbytes_per_sec": 0, 00:08:21.604 "r_mbytes_per_sec": 0, 00:08:21.604 "w_mbytes_per_sec": 0 00:08:21.604 }, 00:08:21.604 "claimed": true, 00:08:21.604 "claim_type": "exclusive_write", 00:08:21.605 "zoned": false, 00:08:21.605 "supported_io_types": { 00:08:21.605 "read": true, 00:08:21.605 "write": true, 00:08:21.605 "unmap": true, 00:08:21.605 "flush": true, 00:08:21.605 "reset": true, 00:08:21.605 "nvme_admin": false, 00:08:21.605 "nvme_io": false, 00:08:21.605 "nvme_io_md": false, 00:08:21.605 "write_zeroes": true, 00:08:21.605 "zcopy": true, 00:08:21.605 "get_zone_info": false, 00:08:21.605 "zone_management": false, 00:08:21.605 "zone_append": false, 00:08:21.605 "compare": false, 00:08:21.605 "compare_and_write": false, 00:08:21.605 "abort": true, 00:08:21.605 "seek_hole": false, 00:08:21.605 "seek_data": false, 00:08:21.605 "copy": true, 00:08:21.605 "nvme_iov_md": false 00:08:21.605 }, 00:08:21.605 "memory_domains": [ 00:08:21.605 { 00:08:21.605 "dma_device_id": "system", 00:08:21.605 "dma_device_type": 1 00:08:21.605 }, 00:08:21.605 { 00:08:21.605 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:21.605 "dma_device_type": 2 00:08:21.605 } 00:08:21.605 ], 00:08:21.605 "driver_specific": {} 00:08:21.605 } 00:08:21.605 ] 00:08:21.605 16:04:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.605 16:04:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:21.605 16:04:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:21.605 16:04:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:21.605 16:04:47 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:21.605 16:04:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:21.605 16:04:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:21.605 16:04:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:21.605 16:04:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:21.605 16:04:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:21.605 16:04:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:21.605 16:04:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:21.605 16:04:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:21.605 16:04:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:21.605 16:04:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.605 16:04:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.605 16:04:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.605 16:04:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:21.605 "name": "Existed_Raid", 00:08:21.605 "uuid": "31b4154a-5275-47ce-aad1-1290e35c8a9b", 00:08:21.605 "strip_size_kb": 0, 00:08:21.605 "state": "configuring", 00:08:21.605 "raid_level": "raid1", 00:08:21.605 "superblock": true, 00:08:21.605 "num_base_bdevs": 2, 00:08:21.605 "num_base_bdevs_discovered": 1, 00:08:21.605 "num_base_bdevs_operational": 2, 00:08:21.605 "base_bdevs_list": [ 00:08:21.605 { 00:08:21.605 "name": "BaseBdev1", 
00:08:21.605 "uuid": "ffa351ce-6809-4314-9b90-ccac35bfb646", 00:08:21.605 "is_configured": true, 00:08:21.605 "data_offset": 2048, 00:08:21.605 "data_size": 63488 00:08:21.605 }, 00:08:21.605 { 00:08:21.605 "name": "BaseBdev2", 00:08:21.605 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:21.605 "is_configured": false, 00:08:21.605 "data_offset": 0, 00:08:21.605 "data_size": 0 00:08:21.605 } 00:08:21.605 ] 00:08:21.605 }' 00:08:21.605 16:04:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:21.605 16:04:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.864 16:04:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:21.864 16:04:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.864 16:04:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.864 [2024-12-12 16:04:48.214086] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:21.864 [2024-12-12 16:04:48.214251] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:22.123 16:04:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.123 16:04:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:22.123 16:04:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.123 16:04:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.123 [2024-12-12 16:04:48.222112] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:22.123 [2024-12-12 16:04:48.224318] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev2 00:08:22.123 [2024-12-12 16:04:48.224400] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:22.123 16:04:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.123 16:04:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:22.124 16:04:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:22.124 16:04:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:22.124 16:04:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:22.124 16:04:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:22.124 16:04:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:22.124 16:04:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:22.124 16:04:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:22.124 16:04:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:22.124 16:04:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:22.124 16:04:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:22.124 16:04:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:22.124 16:04:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:22.124 16:04:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.124 16:04:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:08:22.124 16:04:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:22.124 16:04:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.124 16:04:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:22.124 "name": "Existed_Raid", 00:08:22.124 "uuid": "cd61fd30-40c5-451c-a3a7-0cec0a924d84", 00:08:22.124 "strip_size_kb": 0, 00:08:22.124 "state": "configuring", 00:08:22.124 "raid_level": "raid1", 00:08:22.124 "superblock": true, 00:08:22.124 "num_base_bdevs": 2, 00:08:22.124 "num_base_bdevs_discovered": 1, 00:08:22.124 "num_base_bdevs_operational": 2, 00:08:22.124 "base_bdevs_list": [ 00:08:22.124 { 00:08:22.124 "name": "BaseBdev1", 00:08:22.124 "uuid": "ffa351ce-6809-4314-9b90-ccac35bfb646", 00:08:22.124 "is_configured": true, 00:08:22.124 "data_offset": 2048, 00:08:22.124 "data_size": 63488 00:08:22.124 }, 00:08:22.124 { 00:08:22.124 "name": "BaseBdev2", 00:08:22.124 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:22.124 "is_configured": false, 00:08:22.124 "data_offset": 0, 00:08:22.124 "data_size": 0 00:08:22.124 } 00:08:22.124 ] 00:08:22.124 }' 00:08:22.124 16:04:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:22.124 16:04:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.383 16:04:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:22.383 16:04:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.383 16:04:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.383 [2024-12-12 16:04:48.667396] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:22.383 [2024-12-12 16:04:48.667833] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:22.383 [2024-12-12 16:04:48.667854] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:22.383 [2024-12-12 16:04:48.668166] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:22.383 [2024-12-12 16:04:48.668343] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:22.383 BaseBdev2 00:08:22.383 [2024-12-12 16:04:48.668359] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:22.383 [2024-12-12 16:04:48.668509] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:22.383 16:04:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.383 16:04:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:22.383 16:04:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:22.383 16:04:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:22.383 16:04:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:22.384 16:04:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:22.384 16:04:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:22.384 16:04:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:22.384 16:04:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.384 16:04:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.384 16:04:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:08:22.384 16:04:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:22.384 16:04:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.384 16:04:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.384 [ 00:08:22.384 { 00:08:22.384 "name": "BaseBdev2", 00:08:22.384 "aliases": [ 00:08:22.384 "fe733f5d-db38-4355-b36d-bb2acabb770f" 00:08:22.384 ], 00:08:22.384 "product_name": "Malloc disk", 00:08:22.384 "block_size": 512, 00:08:22.384 "num_blocks": 65536, 00:08:22.384 "uuid": "fe733f5d-db38-4355-b36d-bb2acabb770f", 00:08:22.384 "assigned_rate_limits": { 00:08:22.384 "rw_ios_per_sec": 0, 00:08:22.384 "rw_mbytes_per_sec": 0, 00:08:22.384 "r_mbytes_per_sec": 0, 00:08:22.384 "w_mbytes_per_sec": 0 00:08:22.384 }, 00:08:22.384 "claimed": true, 00:08:22.384 "claim_type": "exclusive_write", 00:08:22.384 "zoned": false, 00:08:22.384 "supported_io_types": { 00:08:22.384 "read": true, 00:08:22.384 "write": true, 00:08:22.384 "unmap": true, 00:08:22.384 "flush": true, 00:08:22.384 "reset": true, 00:08:22.384 "nvme_admin": false, 00:08:22.384 "nvme_io": false, 00:08:22.384 "nvme_io_md": false, 00:08:22.384 "write_zeroes": true, 00:08:22.384 "zcopy": true, 00:08:22.384 "get_zone_info": false, 00:08:22.384 "zone_management": false, 00:08:22.384 "zone_append": false, 00:08:22.384 "compare": false, 00:08:22.384 "compare_and_write": false, 00:08:22.384 "abort": true, 00:08:22.384 "seek_hole": false, 00:08:22.384 "seek_data": false, 00:08:22.384 "copy": true, 00:08:22.384 "nvme_iov_md": false 00:08:22.384 }, 00:08:22.384 "memory_domains": [ 00:08:22.384 { 00:08:22.384 "dma_device_id": "system", 00:08:22.384 "dma_device_type": 1 00:08:22.384 }, 00:08:22.384 { 00:08:22.384 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:22.384 "dma_device_type": 2 00:08:22.384 } 00:08:22.384 ], 00:08:22.384 "driver_specific": 
{} 00:08:22.384 } 00:08:22.384 ] 00:08:22.384 16:04:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.384 16:04:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:22.384 16:04:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:22.384 16:04:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:22.384 16:04:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:22.384 16:04:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:22.384 16:04:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:22.384 16:04:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:22.384 16:04:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:22.384 16:04:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:22.384 16:04:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:22.384 16:04:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:22.384 16:04:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:22.384 16:04:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:22.384 16:04:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:22.384 16:04:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:22.384 16:04:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:22.384 16:04:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.384 16:04:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.643 16:04:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:22.643 "name": "Existed_Raid", 00:08:22.643 "uuid": "cd61fd30-40c5-451c-a3a7-0cec0a924d84", 00:08:22.643 "strip_size_kb": 0, 00:08:22.643 "state": "online", 00:08:22.643 "raid_level": "raid1", 00:08:22.643 "superblock": true, 00:08:22.643 "num_base_bdevs": 2, 00:08:22.643 "num_base_bdevs_discovered": 2, 00:08:22.643 "num_base_bdevs_operational": 2, 00:08:22.643 "base_bdevs_list": [ 00:08:22.643 { 00:08:22.643 "name": "BaseBdev1", 00:08:22.643 "uuid": "ffa351ce-6809-4314-9b90-ccac35bfb646", 00:08:22.643 "is_configured": true, 00:08:22.643 "data_offset": 2048, 00:08:22.643 "data_size": 63488 00:08:22.643 }, 00:08:22.643 { 00:08:22.643 "name": "BaseBdev2", 00:08:22.643 "uuid": "fe733f5d-db38-4355-b36d-bb2acabb770f", 00:08:22.643 "is_configured": true, 00:08:22.643 "data_offset": 2048, 00:08:22.643 "data_size": 63488 00:08:22.643 } 00:08:22.643 ] 00:08:22.643 }' 00:08:22.643 16:04:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:22.643 16:04:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.948 16:04:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:22.948 16:04:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:22.948 16:04:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:22.948 16:04:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:22.948 16:04:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local 
name 00:08:22.948 16:04:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:22.948 16:04:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:22.948 16:04:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.948 16:04:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.948 16:04:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:22.948 [2024-12-12 16:04:49.118974] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:22.948 16:04:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.948 16:04:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:22.948 "name": "Existed_Raid", 00:08:22.948 "aliases": [ 00:08:22.948 "cd61fd30-40c5-451c-a3a7-0cec0a924d84" 00:08:22.948 ], 00:08:22.948 "product_name": "Raid Volume", 00:08:22.948 "block_size": 512, 00:08:22.948 "num_blocks": 63488, 00:08:22.948 "uuid": "cd61fd30-40c5-451c-a3a7-0cec0a924d84", 00:08:22.948 "assigned_rate_limits": { 00:08:22.948 "rw_ios_per_sec": 0, 00:08:22.948 "rw_mbytes_per_sec": 0, 00:08:22.948 "r_mbytes_per_sec": 0, 00:08:22.948 "w_mbytes_per_sec": 0 00:08:22.948 }, 00:08:22.948 "claimed": false, 00:08:22.948 "zoned": false, 00:08:22.948 "supported_io_types": { 00:08:22.948 "read": true, 00:08:22.948 "write": true, 00:08:22.948 "unmap": false, 00:08:22.948 "flush": false, 00:08:22.948 "reset": true, 00:08:22.948 "nvme_admin": false, 00:08:22.948 "nvme_io": false, 00:08:22.948 "nvme_io_md": false, 00:08:22.948 "write_zeroes": true, 00:08:22.948 "zcopy": false, 00:08:22.948 "get_zone_info": false, 00:08:22.948 "zone_management": false, 00:08:22.948 "zone_append": false, 00:08:22.948 "compare": false, 00:08:22.948 "compare_and_write": false, 
00:08:22.948 "abort": false, 00:08:22.948 "seek_hole": false, 00:08:22.948 "seek_data": false, 00:08:22.948 "copy": false, 00:08:22.948 "nvme_iov_md": false 00:08:22.948 }, 00:08:22.948 "memory_domains": [ 00:08:22.948 { 00:08:22.948 "dma_device_id": "system", 00:08:22.948 "dma_device_type": 1 00:08:22.948 }, 00:08:22.948 { 00:08:22.948 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:22.948 "dma_device_type": 2 00:08:22.948 }, 00:08:22.948 { 00:08:22.948 "dma_device_id": "system", 00:08:22.948 "dma_device_type": 1 00:08:22.948 }, 00:08:22.948 { 00:08:22.948 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:22.948 "dma_device_type": 2 00:08:22.948 } 00:08:22.948 ], 00:08:22.948 "driver_specific": { 00:08:22.948 "raid": { 00:08:22.948 "uuid": "cd61fd30-40c5-451c-a3a7-0cec0a924d84", 00:08:22.948 "strip_size_kb": 0, 00:08:22.948 "state": "online", 00:08:22.948 "raid_level": "raid1", 00:08:22.948 "superblock": true, 00:08:22.948 "num_base_bdevs": 2, 00:08:22.948 "num_base_bdevs_discovered": 2, 00:08:22.948 "num_base_bdevs_operational": 2, 00:08:22.948 "base_bdevs_list": [ 00:08:22.948 { 00:08:22.948 "name": "BaseBdev1", 00:08:22.948 "uuid": "ffa351ce-6809-4314-9b90-ccac35bfb646", 00:08:22.948 "is_configured": true, 00:08:22.948 "data_offset": 2048, 00:08:22.948 "data_size": 63488 00:08:22.948 }, 00:08:22.948 { 00:08:22.949 "name": "BaseBdev2", 00:08:22.949 "uuid": "fe733f5d-db38-4355-b36d-bb2acabb770f", 00:08:22.949 "is_configured": true, 00:08:22.949 "data_offset": 2048, 00:08:22.949 "data_size": 63488 00:08:22.949 } 00:08:22.949 ] 00:08:22.949 } 00:08:22.949 } 00:08:22.949 }' 00:08:22.949 16:04:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:22.949 16:04:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:22.949 BaseBdev2' 00:08:22.949 16:04:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 
-- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:22.949 16:04:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:22.949 16:04:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:22.949 16:04:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:22.949 16:04:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:22.949 16:04:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.949 16:04:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.949 16:04:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.949 16:04:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:22.949 16:04:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:22.949 16:04:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:22.949 16:04:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:22.949 16:04:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:22.949 16:04:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.949 16:04:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.213 16:04:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.213 16:04:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:08:23.213 16:04:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:23.213 16:04:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:23.213 16:04:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.213 16:04:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.213 [2024-12-12 16:04:49.318355] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:23.213 16:04:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.213 16:04:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:23.213 16:04:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:08:23.213 16:04:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:23.213 16:04:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:08:23.213 16:04:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:08:23.213 16:04:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:08:23.213 16:04:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:23.213 16:04:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:23.213 16:04:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:23.213 16:04:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:23.213 16:04:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:23.213 16:04:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:23.213 16:04:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:23.213 16:04:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:23.213 16:04:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:23.213 16:04:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:23.213 16:04:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.213 16:04:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.213 16:04:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.213 16:04:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.213 16:04:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:23.213 "name": "Existed_Raid", 00:08:23.213 "uuid": "cd61fd30-40c5-451c-a3a7-0cec0a924d84", 00:08:23.213 "strip_size_kb": 0, 00:08:23.213 "state": "online", 00:08:23.213 "raid_level": "raid1", 00:08:23.213 "superblock": true, 00:08:23.213 "num_base_bdevs": 2, 00:08:23.213 "num_base_bdevs_discovered": 1, 00:08:23.213 "num_base_bdevs_operational": 1, 00:08:23.213 "base_bdevs_list": [ 00:08:23.213 { 00:08:23.213 "name": null, 00:08:23.213 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:23.213 "is_configured": false, 00:08:23.213 "data_offset": 0, 00:08:23.213 "data_size": 63488 00:08:23.213 }, 00:08:23.213 { 00:08:23.213 "name": "BaseBdev2", 00:08:23.213 "uuid": "fe733f5d-db38-4355-b36d-bb2acabb770f", 00:08:23.213 "is_configured": true, 00:08:23.213 "data_offset": 2048, 00:08:23.213 "data_size": 63488 00:08:23.213 } 00:08:23.213 ] 00:08:23.213 }' 00:08:23.213 
16:04:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:23.213 16:04:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.783 16:04:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:23.783 16:04:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:23.783 16:04:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:23.783 16:04:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.783 16:04:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.783 16:04:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.783 16:04:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.783 16:04:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:23.783 16:04:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:23.783 16:04:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:23.783 16:04:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.783 16:04:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.783 [2024-12-12 16:04:49.885214] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:23.783 [2024-12-12 16:04:49.885349] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:23.783 [2024-12-12 16:04:49.992231] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:23.783 [2024-12-12 16:04:49.992299] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:23.783 [2024-12-12 16:04:49.992314] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:23.783 16:04:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.783 16:04:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:23.783 16:04:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:23.783 16:04:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.783 16:04:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:23.783 16:04:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.783 16:04:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.783 16:04:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.783 16:04:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:23.783 16:04:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:23.783 16:04:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:23.783 16:04:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 64971 00:08:23.783 16:04:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 64971 ']' 00:08:23.783 16:04:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 64971 00:08:23.783 16:04:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:08:23.783 16:04:50 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:23.783 16:04:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64971 00:08:23.783 killing process with pid 64971 00:08:23.783 16:04:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:23.783 16:04:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:23.783 16:04:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64971' 00:08:23.783 16:04:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 64971 00:08:23.783 16:04:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 64971 00:08:23.783 [2024-12-12 16:04:50.066861] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:23.783 [2024-12-12 16:04:50.083913] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:25.164 ************************************ 00:08:25.164 END TEST raid_state_function_test_sb 00:08:25.164 ************************************ 00:08:25.164 16:04:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:25.164 00:08:25.164 real 0m5.217s 00:08:25.164 user 0m7.320s 00:08:25.164 sys 0m0.909s 00:08:25.164 16:04:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:25.164 16:04:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.164 16:04:51 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:08:25.164 16:04:51 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:25.164 16:04:51 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:25.164 16:04:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:25.164 
************************************ 00:08:25.164 START TEST raid_superblock_test 00:08:25.164 ************************************ 00:08:25.164 16:04:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:08:25.164 16:04:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:08:25.164 16:04:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:08:25.164 16:04:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:25.164 16:04:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:25.164 16:04:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:25.164 16:04:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:25.164 16:04:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:25.164 16:04:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:25.164 16:04:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:25.164 16:04:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:25.164 16:04:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:25.164 16:04:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:25.164 16:04:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:25.164 16:04:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:08:25.164 16:04:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:08:25.164 16:04:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=65223 00:08:25.164 Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock... 00:08:25.164 16:04:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 65223 00:08:25.164 16:04:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 65223 ']' 00:08:25.164 16:04:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:25.164 16:04:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:25.164 16:04:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:25.164 16:04:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:25.164 16:04:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.164 16:04:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:25.424 [2024-12-12 16:04:51.569841] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:08:25.424 [2024-12-12 16:04:51.570072] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65223 ] 00:08:25.424 [2024-12-12 16:04:51.747013] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.683 [2024-12-12 16:04:51.900748] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.942 [2024-12-12 16:04:52.160776] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:25.942 [2024-12-12 16:04:52.160972] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:26.201 16:04:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:26.201 16:04:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:08:26.201 16:04:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:26.201 16:04:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:26.201 16:04:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:26.201 16:04:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:26.201 16:04:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:26.201 16:04:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:26.201 16:04:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:26.201 16:04:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:26.201 16:04:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:26.201 
16:04:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:26.201 16:04:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:26.201 malloc1
00:08:26.201 16:04:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:26.201 16:04:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:08:26.201 16:04:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:26.201 16:04:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:26.201 [2024-12-12 16:04:52.463924] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:08:26.201 [2024-12-12 16:04:52.464113] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:26.201 [2024-12-12 16:04:52.464179] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:08:26.201 [2024-12-12 16:04:52.464220] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:26.201 [2024-12-12 16:04:52.466943] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:26.201 [2024-12-12 16:04:52.467027] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:08:26.201 pt1
00:08:26.201 16:04:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:26.201 16:04:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:08:26.201 16:04:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:08:26.201 16:04:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2
00:08:26.201 16:04:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2
00:08:26.201 16:04:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:08:26.201 16:04:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:08:26.201 16:04:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:08:26.201 16:04:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:08:26.201 16:04:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2
00:08:26.201 16:04:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:26.201 16:04:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:26.201 malloc2
00:08:26.201 16:04:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:26.201 16:04:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:08:26.201 16:04:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:26.201 16:04:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:26.201 [2024-12-12 16:04:52.532674] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:08:26.201 [2024-12-12 16:04:52.532754] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:26.201 [2024-12-12 16:04:52.532783] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:08:26.201 [2024-12-12 16:04:52.532793] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:26.201 [2024-12-12 16:04:52.535456] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:26.201 [2024-12-12 16:04:52.535499] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:08:26.201 pt2
00:08:26.201 16:04:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:26.201 16:04:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:08:26.201 16:04:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:08:26.201 16:04:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s
00:08:26.201 16:04:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:26.201 16:04:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:26.201 [2024-12-12 16:04:52.544726] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:08:26.201 [2024-12-12 16:04:52.546995] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:08:26.201 [2024-12-12 16:04:52.547199] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:08:26.201 [2024-12-12 16:04:52.547219] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:08:26.201 [2024-12-12 16:04:52.547545] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:08:26.201 [2024-12-12 16:04:52.547757] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:08:26.201 [2024-12-12 16:04:52.547777] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780
00:08:26.201 [2024-12-12 16:04:52.548024] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:26.460 16:04:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:26.460 16:04:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:08:26.460 16:04:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:08:26.460 16:04:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:08:26.460 16:04:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:08:26.460 16:04:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:08:26.460 16:04:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:08:26.460 16:04:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:26.460 16:04:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:26.460 16:04:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:26.460 16:04:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:26.460 16:04:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:26.460 16:04:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:08:26.460 16:04:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:26.460 16:04:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:26.460 16:04:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:26.460 16:04:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:26.460 "name": "raid_bdev1",
00:08:26.460 "uuid": "5d3e05f2-aad7-4cc4-8377-9e613986c05c",
00:08:26.460 "strip_size_kb": 0,
00:08:26.460 "state": "online",
00:08:26.460 "raid_level": "raid1",
00:08:26.460 "superblock": true,
00:08:26.460 "num_base_bdevs": 2,
00:08:26.460 "num_base_bdevs_discovered": 2,
00:08:26.460 "num_base_bdevs_operational": 2,
00:08:26.460 "base_bdevs_list": [
00:08:26.460 {
00:08:26.460 "name": "pt1",
00:08:26.460 "uuid": "00000000-0000-0000-0000-000000000001",
00:08:26.460 "is_configured": true,
00:08:26.460 "data_offset": 2048,
00:08:26.460 "data_size": 63488
00:08:26.460 },
00:08:26.460 {
00:08:26.460 "name": "pt2",
00:08:26.460 "uuid": "00000000-0000-0000-0000-000000000002",
00:08:26.460 "is_configured": true,
00:08:26.460 "data_offset": 2048,
00:08:26.460 "data_size": 63488
00:08:26.460 }
00:08:26.460 ]
00:08:26.460 }'
00:08:26.460 16:04:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:26.460 16:04:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:26.720 16:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:08:26.720 16:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:08:26.720 16:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:08:26.720 16:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:08:26.720 16:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:08:26.720 16:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:08:26.720 16:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:08:26.720 16:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:08:26.720 16:04:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:26.720 16:04:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:26.720 [2024-12-12 16:04:53.032247] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:08:26.720 16:04:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:26.720 16:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:08:26.720 "name": "raid_bdev1",
00:08:26.720 "aliases": [
00:08:26.720 "5d3e05f2-aad7-4cc4-8377-9e613986c05c"
00:08:26.720 ],
00:08:26.720 "product_name": "Raid Volume",
00:08:26.720 "block_size": 512,
00:08:26.720 "num_blocks": 63488,
00:08:26.720 "uuid": "5d3e05f2-aad7-4cc4-8377-9e613986c05c",
00:08:26.720 "assigned_rate_limits": {
00:08:26.720 "rw_ios_per_sec": 0,
00:08:26.720 "rw_mbytes_per_sec": 0,
00:08:26.720 "r_mbytes_per_sec": 0,
00:08:26.720 "w_mbytes_per_sec": 0
00:08:26.720 },
00:08:26.720 "claimed": false,
00:08:26.720 "zoned": false,
00:08:26.720 "supported_io_types": {
00:08:26.720 "read": true,
00:08:26.720 "write": true,
00:08:26.720 "unmap": false,
00:08:26.720 "flush": false,
00:08:26.720 "reset": true,
00:08:26.720 "nvme_admin": false,
00:08:26.720 "nvme_io": false,
00:08:26.720 "nvme_io_md": false,
00:08:26.720 "write_zeroes": true,
00:08:26.720 "zcopy": false,
00:08:26.720 "get_zone_info": false,
00:08:26.720 "zone_management": false,
00:08:26.720 "zone_append": false,
00:08:26.720 "compare": false,
00:08:26.720 "compare_and_write": false,
00:08:26.720 "abort": false,
00:08:26.720 "seek_hole": false,
00:08:26.720 "seek_data": false,
00:08:26.720 "copy": false,
00:08:26.720 "nvme_iov_md": false
00:08:26.720 },
00:08:26.720 "memory_domains": [
00:08:26.720 {
00:08:26.720 "dma_device_id": "system",
00:08:26.720 "dma_device_type": 1
00:08:26.720 },
00:08:26.720 {
00:08:26.720 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:26.720 "dma_device_type": 2
00:08:26.720 },
00:08:26.720 {
00:08:26.720 "dma_device_id": "system",
00:08:26.720 "dma_device_type": 1
00:08:26.720 },
00:08:26.720 {
00:08:26.720 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:26.720 "dma_device_type": 2
00:08:26.720 }
00:08:26.720 ],
00:08:26.720 "driver_specific": {
00:08:26.720 "raid": {
00:08:26.720 "uuid": "5d3e05f2-aad7-4cc4-8377-9e613986c05c",
00:08:26.720 "strip_size_kb": 0,
00:08:26.720 "state": "online",
00:08:26.720 "raid_level": "raid1",
00:08:26.720 "superblock": true,
00:08:26.720 "num_base_bdevs": 2,
00:08:26.720 "num_base_bdevs_discovered": 2,
00:08:26.720 "num_base_bdevs_operational": 2,
00:08:26.720 "base_bdevs_list": [
00:08:26.720 {
00:08:26.720 "name": "pt1",
00:08:26.720 "uuid": "00000000-0000-0000-0000-000000000001",
00:08:26.720 "is_configured": true,
00:08:26.720 "data_offset": 2048,
00:08:26.720 "data_size": 63488
00:08:26.720 },
00:08:26.720 {
00:08:26.720 "name": "pt2",
00:08:26.720 "uuid": "00000000-0000-0000-0000-000000000002",
00:08:26.720 "is_configured": true,
00:08:26.720 "data_offset": 2048,
00:08:26.720 "data_size": 63488
00:08:26.720 }
00:08:26.720 ]
00:08:26.720 }
00:08:26.720 }
00:08:26.720 }'
00:08:26.979 16:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:08:26.979 16:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:08:26.979 pt2'
00:08:26.979 16:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:26.979 16:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:08:26.979 16:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:26.979 16:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:08:26.979 16:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:26.979 16:04:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:26.979 16:04:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:26.979 16:04:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:26.979 16:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:08:26.979 16:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:08:26.979 16:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:26.979 16:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:08:26.979 16:04:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:26.979 16:04:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:26.979 16:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:26.980 16:04:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:26.980 16:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:08:26.980 16:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:08:26.980 16:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:08:26.980 16:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:08:26.980 16:04:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:26.980 16:04:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:26.980 [2024-12-12 16:04:53.279803] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:08:26.980 16:04:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:26.980 16:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=5d3e05f2-aad7-4cc4-8377-9e613986c05c
00:08:26.980 16:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 5d3e05f2-aad7-4cc4-8377-9e613986c05c ']'
00:08:26.980 16:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:08:26.980 16:04:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:26.980 16:04:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:26.980 [2024-12-12 16:04:53.327366] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:08:26.980 [2024-12-12 16:04:53.327414] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:08:26.980 [2024-12-12 16:04:53.327536] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:08:26.980 [2024-12-12 16:04:53.327607] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:08:26.980 [2024-12-12 16:04:53.327625] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:08:27.240 16:04:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:27.240 16:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:27.240 16:04:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:27.240 16:04:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:27.240 16:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:08:27.240 16:04:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:27.240 16:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:08:27.240 16:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:08:27.240 16:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:08:27.240 16:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:08:27.240 16:04:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:27.240 16:04:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:27.240 16:04:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:27.240 16:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:08:27.240 16:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:08:27.240 16:04:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:27.240 16:04:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:27.240 16:04:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:27.240 16:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:08:27.240 16:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:08:27.240 16:04:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:27.240 16:04:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:27.240 16:04:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:27.240 16:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:08:27.240 16:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:08:27.240 16:04:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0
00:08:27.240 16:04:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:08:27.240 16:04:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:08:27.240 16:04:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:08:27.240 16:04:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:08:27.240 16:04:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:08:27.240 16:04:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:08:27.240 16:04:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:27.240 16:04:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:27.240 [2024-12-12 16:04:53.447210] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:08:27.240 [2024-12-12 16:04:53.449710] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:08:27.240 [2024-12-12 16:04:53.449836] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:08:27.240 [2024-12-12 16:04:53.449951] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:08:27.240 [2024-12-12 16:04:53.450008] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:08:27.240 [2024-12-12 16:04:53.450044] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring
00:08:27.240 request:
00:08:27.240 {
00:08:27.240 "name": "raid_bdev1",
00:08:27.240 "raid_level": "raid1",
00:08:27.240 "base_bdevs": [
00:08:27.240 "malloc1",
00:08:27.240 "malloc2"
00:08:27.240 ],
00:08:27.240 "superblock": false,
00:08:27.240 "method": "bdev_raid_create",
00:08:27.240 "req_id": 1
00:08:27.240 }
00:08:27.240 Got JSON-RPC error response
00:08:27.240 response:
00:08:27.240 {
00:08:27.240 "code": -17,
00:08:27.240 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:08:27.240 }
00:08:27.240 16:04:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:08:27.240 16:04:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1
00:08:27.240 16:04:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:08:27.240 16:04:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:08:27.240 16:04:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:08:27.240 16:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:08:27.240 16:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:27.240 16:04:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:27.240 16:04:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:27.240 16:04:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:27.240 16:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:08:27.240 16:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:08:27.240 16:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:08:27.240 16:04:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:27.240 16:04:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:27.240 [2024-12-12 16:04:53.507107] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:08:27.240 [2024-12-12 16:04:53.507288] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:27.240 [2024-12-12 16:04:53.507327] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:08:27.240 [2024-12-12 16:04:53.507379] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:27.240 [2024-12-12 16:04:53.510117] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:27.240 [2024-12-12 16:04:53.510198] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:08:27.240 [2024-12-12 16:04:53.510331] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:08:27.240 [2024-12-12 16:04:53.510426] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:08:27.240 pt1
00:08:27.240 16:04:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:27.240 16:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2
00:08:27.240 16:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:08:27.240 16:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:27.240 16:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:08:27.240 16:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:08:27.240 16:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:08:27.240 16:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:27.240 16:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:27.240 16:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:27.240 16:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:27.240 16:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:27.240 16:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:08:27.240 16:04:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:27.240 16:04:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:27.240 16:04:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:27.240 16:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:27.240 "name": "raid_bdev1",
00:08:27.240 "uuid": "5d3e05f2-aad7-4cc4-8377-9e613986c05c",
00:08:27.240 "strip_size_kb": 0,
00:08:27.240 "state": "configuring",
00:08:27.240 "raid_level": "raid1",
00:08:27.240 "superblock": true,
00:08:27.240 "num_base_bdevs": 2,
00:08:27.240 "num_base_bdevs_discovered": 1,
00:08:27.240 "num_base_bdevs_operational": 2,
00:08:27.240 "base_bdevs_list": [
00:08:27.240 {
00:08:27.240 "name": "pt1",
00:08:27.240 "uuid": "00000000-0000-0000-0000-000000000001",
00:08:27.240 "is_configured": true,
00:08:27.240 "data_offset": 2048,
00:08:27.240 "data_size": 63488
00:08:27.240 },
00:08:27.240 {
00:08:27.240 "name": null,
00:08:27.240 "uuid": "00000000-0000-0000-0000-000000000002",
00:08:27.240 "is_configured": false,
00:08:27.240 "data_offset": 2048,
00:08:27.240 "data_size": 63488
00:08:27.240 }
00:08:27.240 ]
00:08:27.240 }'
00:08:27.240 16:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:27.240 16:04:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:27.808 16:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']'
00:08:27.808 16:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:08:27.808 16:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:08:27.808 16:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:08:27.808 16:04:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:27.808 16:04:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:27.808 [2024-12-12 16:04:53.970324] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:08:27.808 [2024-12-12 16:04:53.970532] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:27.808 [2024-12-12 16:04:53.970578] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080
00:08:27.808 [2024-12-12 16:04:53.970616] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:27.808 [2024-12-12 16:04:53.971205] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:27.808 [2024-12-12 16:04:53.971238] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:08:27.808 [2024-12-12 16:04:53.971338] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:08:27.808 [2024-12-12 16:04:53.971372] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:08:27.808 [2024-12-12 16:04:53.971509] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:08:27.808 [2024-12-12 16:04:53.971528] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:08:27.808 [2024-12-12 16:04:53.971816] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:08:27.808 [2024-12-12 16:04:53.972002] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:08:27.808 [2024-12-12 16:04:53.972012] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80
00:08:27.808 [2024-12-12 16:04:53.972164] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:27.808 pt2
00:08:27.808 16:04:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:27.808 16:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:08:27.808 16:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:08:27.808 16:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:08:27.808 16:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:08:27.808 16:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:08:27.808 16:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:08:27.809 16:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:08:27.809 16:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:08:27.809 16:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:27.809 16:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:27.809 16:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:27.809 16:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:27.809 16:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:27.809 16:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:08:27.809 16:04:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:27.809 16:04:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:27.809 16:04:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:27.809 16:04:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:27.809 "name": "raid_bdev1",
00:08:27.809 "uuid": "5d3e05f2-aad7-4cc4-8377-9e613986c05c",
00:08:27.809 "strip_size_kb": 0,
00:08:27.809 "state": "online",
00:08:27.809 "raid_level": "raid1",
00:08:27.809 "superblock": true,
00:08:27.809 "num_base_bdevs": 2,
00:08:27.809 "num_base_bdevs_discovered": 2,
00:08:27.809 "num_base_bdevs_operational": 2,
00:08:27.809 "base_bdevs_list": [
00:08:27.809 {
00:08:27.809 "name": "pt1",
00:08:27.809 "uuid": "00000000-0000-0000-0000-000000000001",
00:08:27.809 "is_configured": true,
00:08:27.809 "data_offset": 2048,
00:08:27.809 "data_size": 63488
00:08:27.809 },
00:08:27.809 {
00:08:27.809 "name": "pt2",
00:08:27.809 "uuid": "00000000-0000-0000-0000-000000000002",
00:08:27.809 "is_configured": true,
00:08:27.809 "data_offset": 2048,
00:08:27.809 "data_size": 63488
00:08:27.809 }
00:08:27.809 ]
00:08:27.809 }'
00:08:27.809 16:04:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:27.809 16:04:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:28.377 16:04:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:08:28.377 16:04:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:08:28.377 16:04:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:08:28.377 16:04:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:08:28.377 16:04:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:08:28.377 16:04:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:08:28.377 16:04:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:08:28.377 16:04:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:08:28.377 16:04:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:28.377 16:04:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:28.377 [2024-12-12 16:04:54.461837] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:08:28.377 16:04:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:28.377 16:04:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:08:28.377 "name": "raid_bdev1",
00:08:28.377 "aliases": [
00:08:28.377 "5d3e05f2-aad7-4cc4-8377-9e613986c05c"
00:08:28.377 ],
00:08:28.377 "product_name": "Raid Volume",
00:08:28.377 "block_size": 512,
00:08:28.377 "num_blocks": 63488,
00:08:28.377 "uuid": "5d3e05f2-aad7-4cc4-8377-9e613986c05c",
00:08:28.377 "assigned_rate_limits": {
00:08:28.377 "rw_ios_per_sec": 0,
00:08:28.377 "rw_mbytes_per_sec": 0,
00:08:28.377 "r_mbytes_per_sec": 0,
00:08:28.377 "w_mbytes_per_sec": 0
00:08:28.377 },
00:08:28.377 "claimed": false,
00:08:28.377 "zoned": false,
00:08:28.377 "supported_io_types": {
00:08:28.377 "read": true,
00:08:28.377 "write": true,
00:08:28.377 "unmap": false,
00:08:28.377 "flush": false,
00:08:28.377 "reset": true,
00:08:28.377 "nvme_admin": false,
00:08:28.377 "nvme_io": false,
00:08:28.377 "nvme_io_md": false,
00:08:28.377 "write_zeroes": true,
00:08:28.377 "zcopy": false,
00:08:28.377 "get_zone_info": false,
00:08:28.377 "zone_management": false,
00:08:28.377 "zone_append": false,
00:08:28.377 "compare": false,
00:08:28.377 "compare_and_write": false,
00:08:28.377 "abort": false,
00:08:28.377 "seek_hole": false,
00:08:28.377 "seek_data": false,
00:08:28.377 "copy": false,
00:08:28.377 "nvme_iov_md": false
00:08:28.377 },
00:08:28.377 "memory_domains": [
00:08:28.377 {
00:08:28.377 "dma_device_id": "system",
00:08:28.377 "dma_device_type": 1
00:08:28.377 },
00:08:28.377 {
00:08:28.377 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:28.377 "dma_device_type": 2
00:08:28.377 },
00:08:28.377 {
00:08:28.377 "dma_device_id": "system",
00:08:28.377 "dma_device_type": 1
00:08:28.377 },
00:08:28.377 {
00:08:28.377 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:28.377 "dma_device_type": 2
00:08:28.377 }
00:08:28.377 ],
00:08:28.377 "driver_specific": {
00:08:28.377 "raid": {
00:08:28.377 "uuid": "5d3e05f2-aad7-4cc4-8377-9e613986c05c",
00:08:28.377 "strip_size_kb": 0,
00:08:28.377 "state": "online",
00:08:28.377 "raid_level": "raid1",
00:08:28.377 "superblock": true,
00:08:28.377 "num_base_bdevs": 2,
00:08:28.377 "num_base_bdevs_discovered": 2,
00:08:28.377 "num_base_bdevs_operational": 2,
00:08:28.377 "base_bdevs_list": [
00:08:28.377 {
00:08:28.377 "name": "pt1",
00:08:28.377 "uuid": "00000000-0000-0000-0000-000000000001",
00:08:28.377 "is_configured": true,
00:08:28.377 "data_offset": 2048,
00:08:28.377 "data_size": 63488
00:08:28.377 },
00:08:28.377 {
00:08:28.377 "name": "pt2",
00:08:28.377 "uuid": "00000000-0000-0000-0000-000000000002",
00:08:28.377 "is_configured": true,
00:08:28.377 "data_offset": 2048,
00:08:28.377 "data_size": 63488
00:08:28.377 }
00:08:28.377 ]
00:08:28.377 }
00:08:28.377 }
00:08:28.377 }'
00:08:28.377 16:04:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:08:28.377 16:04:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:08:28.377 pt2'
00:08:28.377 16:04:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:28.377 16:04:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:08:28.377 16:04:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:28.377 16:04:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:08:28.377 16:04:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:28.377 16:04:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:28.377 16:04:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:28.377 16:04:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:28.377 16:04:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:08:28.377 16:04:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:08:28.377 16:04:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:28.377 16:04:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:28.378 16:04:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:08:28.378 16:04:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:28.378 16:04:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:28.378 16:04:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:28.378 16:04:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:08:28.378 16:04:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:08:28.378 16:04:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:08:28.378 16:04:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:28.378 16:04:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:28.378 16:04:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid'
00:08:28.636 [2024-12-12 16:04:54.729372] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:08:28.636 16:04:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:28.636 16:04:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 5d3e05f2-aad7-4cc4-8377-9e613986c05c '!=' 5d3e05f2-aad7-4cc4-8377-9e613986c05c ']'
00:08:28.636 16:04:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1
00:08:28.636 16:04:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:08:28.636 16:04:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0
00:08:28.636 16:04:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1
00:08:28.636 16:04:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:28.636 16:04:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:28.636 [2024-12-12 16:04:54.777117] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1
00:08:28.636 16:04:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:28.636 16:04:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:08:28.636 16:04:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:08:28.636 16:04:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:08:28.636 16:04:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:08:28.636 16:04:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:08:28.636 16:04:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107
-- # local num_base_bdevs_operational=1 00:08:28.636 16:04:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:28.636 16:04:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:28.636 16:04:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:28.636 16:04:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:28.636 16:04:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.636 16:04:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:28.636 16:04:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.636 16:04:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.636 16:04:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.636 16:04:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:28.636 "name": "raid_bdev1", 00:08:28.636 "uuid": "5d3e05f2-aad7-4cc4-8377-9e613986c05c", 00:08:28.636 "strip_size_kb": 0, 00:08:28.636 "state": "online", 00:08:28.636 "raid_level": "raid1", 00:08:28.636 "superblock": true, 00:08:28.636 "num_base_bdevs": 2, 00:08:28.636 "num_base_bdevs_discovered": 1, 00:08:28.636 "num_base_bdevs_operational": 1, 00:08:28.636 "base_bdevs_list": [ 00:08:28.636 { 00:08:28.636 "name": null, 00:08:28.636 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:28.636 "is_configured": false, 00:08:28.636 "data_offset": 0, 00:08:28.636 "data_size": 63488 00:08:28.636 }, 00:08:28.636 { 00:08:28.636 "name": "pt2", 00:08:28.636 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:28.636 "is_configured": true, 00:08:28.636 "data_offset": 2048, 00:08:28.636 "data_size": 63488 00:08:28.636 } 00:08:28.636 ] 00:08:28.636 }' 00:08:28.636 16:04:54 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:28.636 16:04:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.899 16:04:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:28.899 16:04:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.899 16:04:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.899 [2024-12-12 16:04:55.220293] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:28.899 [2024-12-12 16:04:55.220347] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:28.899 [2024-12-12 16:04:55.220457] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:28.899 [2024-12-12 16:04:55.220518] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:28.899 [2024-12-12 16:04:55.220533] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:28.899 16:04:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.899 16:04:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.899 16:04:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:08:28.899 16:04:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.899 16:04:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.899 16:04:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.159 16:04:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:08:29.159 16:04:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:08:29.159 
16:04:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:08:29.159 16:04:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:08:29.159 16:04:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:08:29.159 16:04:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.159 16:04:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.159 16:04:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.159 16:04:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:08:29.159 16:04:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:08:29.159 16:04:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:08:29.159 16:04:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:08:29.159 16:04:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 00:08:29.159 16:04:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:29.159 16:04:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.159 16:04:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.160 [2024-12-12 16:04:55.296113] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:29.160 [2024-12-12 16:04:55.296211] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:29.160 [2024-12-12 16:04:55.296232] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:29.160 [2024-12-12 16:04:55.296245] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:29.160 [2024-12-12 
16:04:55.298940] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:29.160 [2024-12-12 16:04:55.299069] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:29.160 [2024-12-12 16:04:55.299189] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:29.160 [2024-12-12 16:04:55.299245] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:29.160 [2024-12-12 16:04:55.299363] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:29.160 [2024-12-12 16:04:55.299376] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:29.160 [2024-12-12 16:04:55.299652] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:29.160 [2024-12-12 16:04:55.299818] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:29.160 [2024-12-12 16:04:55.299828] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:08:29.160 [2024-12-12 16:04:55.300070] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:29.160 pt2 00:08:29.160 16:04:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.160 16:04:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:29.160 16:04:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:29.160 16:04:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:29.160 16:04:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:29.160 16:04:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:29.160 16:04:55 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:29.160 16:04:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:29.160 16:04:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:29.160 16:04:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:29.160 16:04:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:29.160 16:04:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:29.160 16:04:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:29.160 16:04:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.160 16:04:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.160 16:04:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.160 16:04:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:29.160 "name": "raid_bdev1", 00:08:29.160 "uuid": "5d3e05f2-aad7-4cc4-8377-9e613986c05c", 00:08:29.160 "strip_size_kb": 0, 00:08:29.160 "state": "online", 00:08:29.160 "raid_level": "raid1", 00:08:29.160 "superblock": true, 00:08:29.160 "num_base_bdevs": 2, 00:08:29.160 "num_base_bdevs_discovered": 1, 00:08:29.160 "num_base_bdevs_operational": 1, 00:08:29.160 "base_bdevs_list": [ 00:08:29.160 { 00:08:29.160 "name": null, 00:08:29.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:29.160 "is_configured": false, 00:08:29.160 "data_offset": 2048, 00:08:29.160 "data_size": 63488 00:08:29.160 }, 00:08:29.160 { 00:08:29.160 "name": "pt2", 00:08:29.160 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:29.160 "is_configured": true, 00:08:29.160 "data_offset": 2048, 00:08:29.160 "data_size": 63488 00:08:29.160 } 00:08:29.160 ] 00:08:29.160 }' 
00:08:29.160 16:04:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:29.160 16:04:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.420 16:04:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:29.420 16:04:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.420 16:04:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.420 [2024-12-12 16:04:55.695535] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:29.420 [2024-12-12 16:04:55.695714] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:29.420 [2024-12-12 16:04:55.695851] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:29.420 [2024-12-12 16:04:55.695954] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:29.420 [2024-12-12 16:04:55.696036] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:08:29.420 16:04:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.420 16:04:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:29.420 16:04:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.420 16:04:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.420 16:04:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:08:29.420 16:04:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.420 16:04:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:08:29.420 16:04:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' 
']' 00:08:29.420 16:04:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:08:29.420 16:04:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:29.420 16:04:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.420 16:04:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.420 [2024-12-12 16:04:55.755482] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:29.420 [2024-12-12 16:04:55.755678] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:29.420 [2024-12-12 16:04:55.755734] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:08:29.420 [2024-12-12 16:04:55.755772] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:29.420 [2024-12-12 16:04:55.758733] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:29.420 [2024-12-12 16:04:55.758825] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:29.420 [2024-12-12 16:04:55.758989] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:29.420 [2024-12-12 16:04:55.759080] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:29.420 [2024-12-12 16:04:55.759301] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:08:29.420 [2024-12-12 16:04:55.759366] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:29.420 [2024-12-12 16:04:55.759430] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:08:29.420 [2024-12-12 16:04:55.759547] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
pt2 is claimed 00:08:29.420 [2024-12-12 16:04:55.759737] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:08:29.420 [2024-12-12 16:04:55.759787] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:29.420 pt1 00:08:29.420 [2024-12-12 16:04:55.760143] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:08:29.420 [2024-12-12 16:04:55.760329] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:08:29.420 [2024-12-12 16:04:55.760345] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:08:29.420 [2024-12-12 16:04:55.760532] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:29.420 16:04:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.420 16:04:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:08:29.420 16:04:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:29.420 16:04:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:29.420 16:04:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:29.420 16:04:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:29.420 16:04:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:29.420 16:04:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:29.420 16:04:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:29.420 16:04:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:29.420 16:04:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:29.420 16:04:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:29.420 16:04:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:29.420 16:04:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.420 16:04:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.420 16:04:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:29.680 16:04:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.680 16:04:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:29.680 "name": "raid_bdev1", 00:08:29.680 "uuid": "5d3e05f2-aad7-4cc4-8377-9e613986c05c", 00:08:29.680 "strip_size_kb": 0, 00:08:29.680 "state": "online", 00:08:29.680 "raid_level": "raid1", 00:08:29.680 "superblock": true, 00:08:29.680 "num_base_bdevs": 2, 00:08:29.680 "num_base_bdevs_discovered": 1, 00:08:29.680 "num_base_bdevs_operational": 1, 00:08:29.680 "base_bdevs_list": [ 00:08:29.680 { 00:08:29.680 "name": null, 00:08:29.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:29.680 "is_configured": false, 00:08:29.680 "data_offset": 2048, 00:08:29.680 "data_size": 63488 00:08:29.680 }, 00:08:29.680 { 00:08:29.680 "name": "pt2", 00:08:29.680 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:29.680 "is_configured": true, 00:08:29.680 "data_offset": 2048, 00:08:29.680 "data_size": 63488 00:08:29.680 } 00:08:29.680 ] 00:08:29.680 }' 00:08:29.680 16:04:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:29.680 16:04:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.941 16:04:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:08:29.941 16:04:56 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.941 16:04:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.941 16:04:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:08:29.941 16:04:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.941 16:04:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:08:29.941 16:04:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:29.941 16:04:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:08:29.941 16:04:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.941 16:04:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.941 [2024-12-12 16:04:56.183050] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:29.941 16:04:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.941 16:04:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 5d3e05f2-aad7-4cc4-8377-9e613986c05c '!=' 5d3e05f2-aad7-4cc4-8377-9e613986c05c ']' 00:08:29.941 16:04:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 65223 00:08:29.941 16:04:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 65223 ']' 00:08:29.941 16:04:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 65223 00:08:29.941 16:04:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:08:29.941 16:04:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:29.941 16:04:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65223 00:08:29.941 killing process with pid 
65223 00:08:29.941 16:04:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:29.941 16:04:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:29.941 16:04:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65223' 00:08:29.941 16:04:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 65223 00:08:29.941 [2024-12-12 16:04:56.261527] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:29.941 16:04:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 65223 00:08:29.941 [2024-12-12 16:04:56.261665] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:29.941 [2024-12-12 16:04:56.261729] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:29.941 [2024-12-12 16:04:56.261749] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:08:30.200 [2024-12-12 16:04:56.521105] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:31.582 16:04:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:31.582 00:08:31.582 real 0m6.423s 00:08:31.582 user 0m9.498s 00:08:31.582 sys 0m1.126s 00:08:31.583 ************************************ 00:08:31.583 END TEST raid_superblock_test 00:08:31.583 ************************************ 00:08:31.583 16:04:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:31.583 16:04:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.583 16:04:57 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:08:31.583 16:04:57 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:31.583 16:04:57 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:08:31.583 16:04:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:31.841 ************************************ 00:08:31.841 START TEST raid_read_error_test 00:08:31.841 ************************************ 00:08:31.841 16:04:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 read 00:08:31.841 16:04:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:08:31.841 16:04:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:31.841 16:04:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:31.841 16:04:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:31.841 16:04:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:31.841 16:04:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:31.841 16:04:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:31.841 16:04:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:31.841 16:04:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:31.841 16:04:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:31.841 16:04:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:31.841 16:04:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:31.841 16:04:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:31.841 16:04:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:31.841 16:04:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:31.841 16:04:57 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:31.841 16:04:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:31.841 16:04:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:31.841 16:04:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:08:31.841 16:04:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:08:31.841 16:04:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:31.841 16:04:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.SbE0SiuoCu 00:08:31.841 16:04:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65553 00:08:31.841 16:04:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:31.841 16:04:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65553 00:08:31.841 16:04:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 65553 ']' 00:08:31.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:31.841 16:04:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:31.841 16:04:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:31.841 16:04:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:31.841 16:04:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:31.841 16:04:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.841 [2024-12-12 16:04:58.044715] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:08:31.841 [2024-12-12 16:04:58.044979] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65553 ] 00:08:32.100 [2024-12-12 16:04:58.219832] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.100 [2024-12-12 16:04:58.365823] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.358 [2024-12-12 16:04:58.620648] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:32.358 [2024-12-12 16:04:58.620868] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:32.617 16:04:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:32.617 16:04:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:32.617 16:04:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:32.617 16:04:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:32.617 16:04:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.617 16:04:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.617 BaseBdev1_malloc 00:08:32.617 16:04:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.617 16:04:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:08:32.617 16:04:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.617 16:04:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.617 true 00:08:32.617 16:04:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.617 16:04:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:32.617 16:04:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.617 16:04:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.875 [2024-12-12 16:04:58.971425] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:32.875 [2024-12-12 16:04:58.971614] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:32.875 [2024-12-12 16:04:58.971666] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:32.875 [2024-12-12 16:04:58.971683] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:32.875 [2024-12-12 16:04:58.974487] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:32.875 [2024-12-12 16:04:58.974536] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:32.875 BaseBdev1 00:08:32.875 16:04:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.876 16:04:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:32.876 16:04:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:32.876 16:04:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.876 16:04:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:08:32.876 BaseBdev2_malloc 00:08:32.876 16:04:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.876 16:04:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:32.876 16:04:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.876 16:04:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.876 true 00:08:32.876 16:04:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.876 16:04:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:32.876 16:04:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.876 16:04:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.876 [2024-12-12 16:04:59.051394] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:32.876 [2024-12-12 16:04:59.051576] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:32.876 [2024-12-12 16:04:59.051606] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:32.876 [2024-12-12 16:04:59.051620] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:32.876 [2024-12-12 16:04:59.054505] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:32.876 [2024-12-12 16:04:59.054552] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:32.876 BaseBdev2 00:08:32.876 16:04:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.876 16:04:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:32.876 16:04:59 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.876 16:04:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.876 [2024-12-12 16:04:59.063523] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:32.876 [2024-12-12 16:04:59.065952] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:32.876 [2024-12-12 16:04:59.066245] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:32.876 [2024-12-12 16:04:59.066297] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:32.876 [2024-12-12 16:04:59.066614] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:32.876 [2024-12-12 16:04:59.066888] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:32.876 [2024-12-12 16:04:59.066944] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:32.876 [2024-12-12 16:04:59.067196] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:32.876 16:04:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.876 16:04:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:32.876 16:04:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:32.876 16:04:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:32.876 16:04:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:32.876 16:04:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:32.876 16:04:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
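The `verify_raid_bdev_state raid_bdev1 online raid1 0 2` call traced here (bdev_raid.sh@103-115) fetches the array's JSON with `rpc_cmd bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'` and compares fields against the expected values. A minimal sketch of those field checks, run against the exact JSON printed in the trace below (plain `grep` stands in for `jq` so it runs without SPDK):

```shell
#!/usr/bin/env bash
# Sketch of the field checks verify_raid_bdev_state performs, using the
# raid_bdev_info JSON from the trace. The real helper extracts each field
# with jq; substring grep is enough for this self-contained illustration.
raid_bdev_info='{
  "name": "raid_bdev1",
  "state": "online",
  "raid_level": "raid1",
  "strip_size_kb": 0,
  "num_base_bdevs": 2,
  "num_base_bdevs_discovered": 2,
  "num_base_bdevs_operational": 2
}'

check() {  # check <key> <expected-json-value>
    if grep -q "\"$1\": $2" <<< "$raid_bdev_info"; then
        echo "OK: $1"
    else
        echo "FAIL: $1"
        return 1
    fi
}

check state '"online"'
check raid_level '"raid1"'
check strip_size_kb 0              # raid1 has no strip size
check num_base_bdevs_operational 2 # both mirror legs still operational
```

After the read-error injection the same verification still expects 2 operational base bdevs, since a failed read on one raid1 leg is served from the mirror without degrading the array.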
00:08:32.876 16:04:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:32.876 16:04:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:32.876 16:04:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:32.876 16:04:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:32.876 16:04:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:32.876 16:04:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.876 16:04:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.876 16:04:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:32.876 16:04:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.876 16:04:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:32.876 "name": "raid_bdev1", 00:08:32.876 "uuid": "45aef41b-2b3a-42ed-a273-12a3a6b5f888", 00:08:32.876 "strip_size_kb": 0, 00:08:32.876 "state": "online", 00:08:32.876 "raid_level": "raid1", 00:08:32.876 "superblock": true, 00:08:32.876 "num_base_bdevs": 2, 00:08:32.876 "num_base_bdevs_discovered": 2, 00:08:32.876 "num_base_bdevs_operational": 2, 00:08:32.876 "base_bdevs_list": [ 00:08:32.876 { 00:08:32.876 "name": "BaseBdev1", 00:08:32.876 "uuid": "52cdbab5-0f4b-503c-a297-131272b99696", 00:08:32.876 "is_configured": true, 00:08:32.876 "data_offset": 2048, 00:08:32.876 "data_size": 63488 00:08:32.876 }, 00:08:32.876 { 00:08:32.876 "name": "BaseBdev2", 00:08:32.876 "uuid": "2fae0360-8361-5cc6-8e57-306a99b1d513", 00:08:32.876 "is_configured": true, 00:08:32.876 "data_offset": 2048, 00:08:32.876 "data_size": 63488 00:08:32.876 } 00:08:32.876 ] 00:08:32.876 }' 00:08:32.876 16:04:59 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:32.876 16:04:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.443 16:04:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:33.443 16:04:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:33.443 [2024-12-12 16:04:59.616245] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:34.378 16:05:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:34.378 16:05:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.378 16:05:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.378 16:05:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.378 16:05:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:34.378 16:05:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:34.378 16:05:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:08:34.378 16:05:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:34.378 16:05:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:34.378 16:05:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:34.378 16:05:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:34.378 16:05:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:34.378 16:05:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:34.378 16:05:00 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:34.378 16:05:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:34.378 16:05:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:34.378 16:05:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:34.378 16:05:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:34.378 16:05:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.378 16:05:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:34.378 16:05:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.378 16:05:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.378 16:05:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.378 16:05:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:34.378 "name": "raid_bdev1", 00:08:34.378 "uuid": "45aef41b-2b3a-42ed-a273-12a3a6b5f888", 00:08:34.378 "strip_size_kb": 0, 00:08:34.378 "state": "online", 00:08:34.378 "raid_level": "raid1", 00:08:34.378 "superblock": true, 00:08:34.378 "num_base_bdevs": 2, 00:08:34.378 "num_base_bdevs_discovered": 2, 00:08:34.378 "num_base_bdevs_operational": 2, 00:08:34.378 "base_bdevs_list": [ 00:08:34.378 { 00:08:34.378 "name": "BaseBdev1", 00:08:34.378 "uuid": "52cdbab5-0f4b-503c-a297-131272b99696", 00:08:34.378 "is_configured": true, 00:08:34.378 "data_offset": 2048, 00:08:34.378 "data_size": 63488 00:08:34.378 }, 00:08:34.378 { 00:08:34.378 "name": "BaseBdev2", 00:08:34.378 "uuid": "2fae0360-8361-5cc6-8e57-306a99b1d513", 00:08:34.378 "is_configured": true, 00:08:34.378 "data_offset": 2048, 00:08:34.378 "data_size": 63488 
00:08:34.378 } 00:08:34.378 ] 00:08:34.378 }' 00:08:34.378 16:05:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:34.378 16:05:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.942 16:05:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:34.942 16:05:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.942 16:05:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.942 [2024-12-12 16:05:01.017059] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:34.942 [2024-12-12 16:05:01.017210] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:34.942 [2024-12-12 16:05:01.020130] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:34.942 [2024-12-12 16:05:01.020228] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:34.942 [2024-12-12 16:05:01.020341] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:34.942 [2024-12-12 16:05:01.020391] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:34.942 { 00:08:34.942 "results": [ 00:08:34.942 { 00:08:34.942 "job": "raid_bdev1", 00:08:34.942 "core_mask": "0x1", 00:08:34.942 "workload": "randrw", 00:08:34.942 "percentage": 50, 00:08:34.942 "status": "finished", 00:08:34.942 "queue_depth": 1, 00:08:34.942 "io_size": 131072, 00:08:34.942 "runtime": 1.401383, 00:08:34.942 "iops": 12033.113003368815, 00:08:34.942 "mibps": 1504.1391254211019, 00:08:34.942 "io_failed": 0, 00:08:34.942 "io_timeout": 0, 00:08:34.942 "avg_latency_us": 80.10054124854626, 00:08:34.942 "min_latency_us": 26.1589519650655, 00:08:34.942 "max_latency_us": 1509.6174672489083 00:08:34.942 } 00:08:34.942 ], 
00:08:34.942 "core_count": 1 00:08:34.942 } 00:08:34.942 16:05:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.942 16:05:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65553 00:08:34.942 16:05:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 65553 ']' 00:08:34.942 16:05:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 65553 00:08:34.942 16:05:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:08:34.942 16:05:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:34.942 16:05:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65553 00:08:34.942 killing process with pid 65553 00:08:34.942 16:05:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:34.942 16:05:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:34.942 16:05:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65553' 00:08:34.942 16:05:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 65553 00:08:34.943 [2024-12-12 16:05:01.060987] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:34.943 16:05:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 65553 00:08:34.943 [2024-12-12 16:05:01.225047] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:36.317 16:05:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.SbE0SiuoCu 00:08:36.317 16:05:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:36.317 16:05:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:36.317 16:05:02 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:08:36.317 16:05:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:08:36.317 ************************************ 00:08:36.317 END TEST raid_read_error_test 00:08:36.317 ************************************ 00:08:36.317 16:05:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:36.317 16:05:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:36.317 16:05:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:08:36.317 00:08:36.317 real 0m4.644s 00:08:36.317 user 0m5.473s 00:08:36.317 sys 0m0.610s 00:08:36.317 16:05:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:36.317 16:05:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.317 16:05:02 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:08:36.317 16:05:02 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:36.317 16:05:02 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:36.317 16:05:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:36.317 ************************************ 00:08:36.317 START TEST raid_write_error_test 00:08:36.317 ************************************ 00:08:36.317 16:05:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 write 00:08:36.317 16:05:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:08:36.317 16:05:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:36.317 16:05:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:36.317 16:05:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:36.317 16:05:02 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:36.317 16:05:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:36.317 16:05:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:36.317 16:05:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:36.317 16:05:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:36.317 16:05:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:36.317 16:05:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:36.317 16:05:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:36.317 16:05:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:36.317 16:05:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:36.317 16:05:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:36.317 16:05:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:36.317 16:05:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:36.317 16:05:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:36.317 16:05:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:08:36.317 16:05:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:08:36.317 16:05:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:36.317 16:05:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.1pOSA10Zlt 00:08:36.317 16:05:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65699 00:08:36.317 16:05:02 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@811 -- # waitforlisten 65699 00:08:36.317 16:05:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 65699 ']' 00:08:36.317 16:05:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:36.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:36.317 16:05:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:36.317 16:05:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:36.317 16:05:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:36.317 16:05:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.317 16:05:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:36.576 [2024-12-12 16:05:02.750699] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
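The pass/fail gate for these runs is the `fail_per_s=0.00` check (bdev_raid.sh@845-847), scraped from the bdevperf log with `grep`/`awk`. The figures in the results JSON are internally consistent: at the fixed 128 KiB I/O size, MiB/s is IOPS × io_size / 2²⁰, and fail_per_s is io_failed / runtime. A small sketch reproducing that arithmetic with the numbers from the read-error run above:

```shell
#!/usr/bin/env bash
# Reproduce the derived figures from the read-error results block:
# iops 12033.113..., io_size 131072 (128 KiB), runtime 1.401383 s, io_failed 0.
iops=12033.113003368815
io_size=131072
io_failed=0
runtime=1.401383

# MiB/s follows directly from IOPS at a fixed 128 KiB I/O size.
mibps=$(awk -v i="$iops" -v s="$io_size" 'BEGIN { printf "%.2f", i * s / (1024 * 1024) }')
echo "mibps=$mibps"

# The test passes only when no I/O failed, i.e. fail_per_s is exactly 0.00.
fail_per_s=$(awk -v f="$io_failed" -v r="$runtime" 'BEGIN { printf "%.2f", f / r }')
echo "fail_per_s=$fail_per_s"
[[ $fail_per_s = 0.00 ]] && echo "PASS"
```

The same relation explains why the write-error run reports higher IOPS with a lower avg_latency_us: with queue_depth 1, IOPS is simply the reciprocal of per-I/O latency.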
00:08:36.576 [2024-12-12 16:05:02.750822] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65699 ] 00:08:36.576 [2024-12-12 16:05:02.925756] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.835 [2024-12-12 16:05:03.063436] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.094 [2024-12-12 16:05:03.303765] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:37.094 [2024-12-12 16:05:03.303951] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:37.353 16:05:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:37.353 16:05:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:37.354 16:05:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:37.354 16:05:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:37.354 16:05:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.354 16:05:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.354 BaseBdev1_malloc 00:08:37.354 16:05:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.354 16:05:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:37.354 16:05:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.354 16:05:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.354 true 00:08:37.354 16:05:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:08:37.354 16:05:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:37.354 16:05:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.354 16:05:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.354 [2024-12-12 16:05:03.691379] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:37.354 [2024-12-12 16:05:03.691526] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:37.354 [2024-12-12 16:05:03.691564] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:37.354 [2024-12-12 16:05:03.691595] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:37.354 [2024-12-12 16:05:03.693969] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:37.354 [2024-12-12 16:05:03.694044] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:37.354 BaseBdev1 00:08:37.354 16:05:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.354 16:05:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:37.354 16:05:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:37.354 16:05:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.354 16:05:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.613 BaseBdev2_malloc 00:08:37.613 16:05:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.613 16:05:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:37.613 16:05:03 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.613 16:05:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.613 true 00:08:37.613 16:05:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.613 16:05:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:37.613 16:05:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.613 16:05:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.613 [2024-12-12 16:05:03.752296] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:37.613 [2024-12-12 16:05:03.752438] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:37.613 [2024-12-12 16:05:03.752481] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:37.613 [2024-12-12 16:05:03.752517] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:37.613 [2024-12-12 16:05:03.754876] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:37.613 [2024-12-12 16:05:03.754927] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:37.613 BaseBdev2 00:08:37.613 16:05:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.613 16:05:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:37.613 16:05:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.613 16:05:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.613 [2024-12-12 16:05:03.760348] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:08:37.613 [2024-12-12 16:05:03.762469] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:37.613 [2024-12-12 16:05:03.762727] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:37.613 [2024-12-12 16:05:03.762777] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:37.613 [2024-12-12 16:05:03.763052] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:37.614 [2024-12-12 16:05:03.763301] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:37.614 [2024-12-12 16:05:03.763344] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:37.614 [2024-12-12 16:05:03.763532] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:37.614 16:05:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.614 16:05:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:37.614 16:05:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:37.614 16:05:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:37.614 16:05:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:37.614 16:05:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:37.614 16:05:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:37.614 16:05:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:37.614 16:05:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:37.614 16:05:03 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:37.614 16:05:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:37.614 16:05:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.614 16:05:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.614 16:05:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.614 16:05:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:37.614 16:05:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.614 16:05:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:37.614 "name": "raid_bdev1", 00:08:37.614 "uuid": "ea0c54d8-cb01-4723-8f54-a669f563433f", 00:08:37.614 "strip_size_kb": 0, 00:08:37.614 "state": "online", 00:08:37.614 "raid_level": "raid1", 00:08:37.614 "superblock": true, 00:08:37.614 "num_base_bdevs": 2, 00:08:37.614 "num_base_bdevs_discovered": 2, 00:08:37.614 "num_base_bdevs_operational": 2, 00:08:37.614 "base_bdevs_list": [ 00:08:37.614 { 00:08:37.614 "name": "BaseBdev1", 00:08:37.614 "uuid": "2c7697a5-815d-5b50-aa31-003537e15a5e", 00:08:37.614 "is_configured": true, 00:08:37.614 "data_offset": 2048, 00:08:37.614 "data_size": 63488 00:08:37.614 }, 00:08:37.614 { 00:08:37.614 "name": "BaseBdev2", 00:08:37.614 "uuid": "85ca2e81-362b-5797-84ba-b5abd1266336", 00:08:37.614 "is_configured": true, 00:08:37.614 "data_offset": 2048, 00:08:37.614 "data_size": 63488 00:08:37.614 } 00:08:37.614 ] 00:08:37.614 }' 00:08:37.614 16:05:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:37.614 16:05:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.920 16:05:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:37.920 16:05:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:38.179 [2024-12-12 16:05:04.344948] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:39.118 16:05:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:39.118 16:05:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.118 16:05:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.118 [2024-12-12 16:05:05.240846] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:08:39.118 [2024-12-12 16:05:05.240947] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:39.118 [2024-12-12 16:05:05.241159] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000063c0 00:08:39.118 16:05:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.118 16:05:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:39.118 16:05:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:39.118 16:05:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:08:39.118 16:05:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:08:39.118 16:05:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:39.118 16:05:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:39.118 16:05:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:39.118 16:05:05 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:39.118 16:05:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:39.118 16:05:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:39.118 16:05:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:39.118 16:05:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:39.118 16:05:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:39.118 16:05:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:39.118 16:05:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.118 16:05:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.118 16:05:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:39.118 16:05:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.118 16:05:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.118 16:05:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:39.118 "name": "raid_bdev1", 00:08:39.118 "uuid": "ea0c54d8-cb01-4723-8f54-a669f563433f", 00:08:39.118 "strip_size_kb": 0, 00:08:39.118 "state": "online", 00:08:39.118 "raid_level": "raid1", 00:08:39.118 "superblock": true, 00:08:39.118 "num_base_bdevs": 2, 00:08:39.118 "num_base_bdevs_discovered": 1, 00:08:39.118 "num_base_bdevs_operational": 1, 00:08:39.118 "base_bdevs_list": [ 00:08:39.118 { 00:08:39.118 "name": null, 00:08:39.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:39.118 "is_configured": false, 00:08:39.118 "data_offset": 0, 00:08:39.118 "data_size": 63488 00:08:39.118 }, 
00:08:39.118 { 00:08:39.118 "name": "BaseBdev2", 00:08:39.118 "uuid": "85ca2e81-362b-5797-84ba-b5abd1266336", 00:08:39.118 "is_configured": true, 00:08:39.118 "data_offset": 2048, 00:08:39.118 "data_size": 63488 00:08:39.118 } 00:08:39.118 ] 00:08:39.118 }' 00:08:39.119 16:05:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:39.119 16:05:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.378 16:05:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:39.378 16:05:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.378 16:05:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.378 [2024-12-12 16:05:05.661839] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:39.378 [2024-12-12 16:05:05.661995] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:39.378 [2024-12-12 16:05:05.664810] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:39.378 [2024-12-12 16:05:05.664912] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:39.378 [2024-12-12 16:05:05.665016] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:39.378 [2024-12-12 16:05:05.665066] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:39.378 { 00:08:39.378 "results": [ 00:08:39.378 { 00:08:39.378 "job": "raid_bdev1", 00:08:39.378 "core_mask": "0x1", 00:08:39.378 "workload": "randrw", 00:08:39.378 "percentage": 50, 00:08:39.378 "status": "finished", 00:08:39.378 "queue_depth": 1, 00:08:39.378 "io_size": 131072, 00:08:39.378 "runtime": 1.317405, 00:08:39.378 "iops": 16520.356306526846, 00:08:39.378 "mibps": 2065.044538315856, 00:08:39.378 "io_failed": 0, 
00:08:39.378 "io_timeout": 0, 00:08:39.378 "avg_latency_us": 57.88220088620365, 00:08:39.378 "min_latency_us": 22.805240174672488, 00:08:39.378 "max_latency_us": 1366.5257641921398 00:08:39.378 } 00:08:39.378 ], 00:08:39.378 "core_count": 1 00:08:39.378 } 00:08:39.378 16:05:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.378 16:05:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65699 00:08:39.378 16:05:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 65699 ']' 00:08:39.378 16:05:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 65699 00:08:39.378 16:05:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:08:39.378 16:05:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:39.378 16:05:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65699 00:08:39.378 16:05:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:39.379 16:05:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:39.379 16:05:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65699' 00:08:39.379 killing process with pid 65699 00:08:39.379 16:05:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 65699 00:08:39.379 [2024-12-12 16:05:05.706410] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:39.379 16:05:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 65699 00:08:39.638 [2024-12-12 16:05:05.853749] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:41.019 16:05:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:41.019 16:05:07 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.1pOSA10Zlt 00:08:41.019 16:05:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:41.019 16:05:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:08:41.019 16:05:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:08:41.019 16:05:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:41.019 16:05:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:41.019 16:05:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:08:41.019 00:08:41.020 real 0m4.541s 00:08:41.020 user 0m5.365s 00:08:41.020 sys 0m0.622s 00:08:41.020 16:05:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:41.020 16:05:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.020 ************************************ 00:08:41.020 END TEST raid_write_error_test 00:08:41.020 ************************************ 00:08:41.020 16:05:07 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:08:41.020 16:05:07 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:41.020 16:05:07 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:08:41.020 16:05:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:41.020 16:05:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:41.020 16:05:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:41.020 ************************************ 00:08:41.020 START TEST raid_state_function_test 00:08:41.020 ************************************ 00:08:41.020 16:05:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 false 00:08:41.020 16:05:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:41.020 16:05:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:41.020 16:05:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:41.020 16:05:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:41.020 16:05:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:41.020 16:05:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:41.020 16:05:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:41.020 16:05:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:41.020 16:05:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:41.020 16:05:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:41.020 16:05:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:41.020 16:05:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:41.020 16:05:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:41.020 16:05:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:41.020 16:05:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:41.020 16:05:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:41.020 16:05:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:41.020 16:05:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:41.020 16:05:07 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@211 -- # local strip_size 00:08:41.020 16:05:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:41.020 16:05:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:41.020 16:05:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:41.020 16:05:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:41.020 16:05:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:41.020 16:05:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:41.020 16:05:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:41.020 16:05:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=65837 00:08:41.020 16:05:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:41.020 16:05:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 65837' 00:08:41.020 Process raid pid: 65837 00:08:41.020 16:05:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 65837 00:08:41.020 16:05:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 65837 ']' 00:08:41.020 16:05:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:41.020 16:05:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:41.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:41.020 16:05:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:41.020 16:05:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:41.020 16:05:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.020 [2024-12-12 16:05:07.346641] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:08:41.020 [2024-12-12 16:05:07.346833] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:41.280 [2024-12-12 16:05:07.518809] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.539 [2024-12-12 16:05:07.654306] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.798 [2024-12-12 16:05:07.914979] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:41.798 [2024-12-12 16:05:07.915043] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:42.058 16:05:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:42.058 16:05:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:08:42.058 16:05:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:42.058 16:05:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.058 16:05:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.058 [2024-12-12 16:05:08.195797] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:42.058 [2024-12-12 16:05:08.195875] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:42.058 [2024-12-12 16:05:08.195887] 
bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:42.058 [2024-12-12 16:05:08.195911] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:42.058 [2024-12-12 16:05:08.195919] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:42.058 [2024-12-12 16:05:08.195946] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:42.058 16:05:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.058 16:05:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:42.058 16:05:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:42.058 16:05:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:42.058 16:05:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:42.058 16:05:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:42.058 16:05:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:42.058 16:05:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:42.058 16:05:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:42.058 16:05:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:42.058 16:05:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:42.058 16:05:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.058 16:05:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.058 16:05:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:42.058 16:05:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.058 16:05:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.058 16:05:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:42.058 "name": "Existed_Raid", 00:08:42.058 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:42.058 "strip_size_kb": 64, 00:08:42.058 "state": "configuring", 00:08:42.058 "raid_level": "raid0", 00:08:42.058 "superblock": false, 00:08:42.058 "num_base_bdevs": 3, 00:08:42.058 "num_base_bdevs_discovered": 0, 00:08:42.058 "num_base_bdevs_operational": 3, 00:08:42.058 "base_bdevs_list": [ 00:08:42.058 { 00:08:42.058 "name": "BaseBdev1", 00:08:42.058 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:42.058 "is_configured": false, 00:08:42.058 "data_offset": 0, 00:08:42.058 "data_size": 0 00:08:42.058 }, 00:08:42.058 { 00:08:42.058 "name": "BaseBdev2", 00:08:42.058 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:42.058 "is_configured": false, 00:08:42.058 "data_offset": 0, 00:08:42.058 "data_size": 0 00:08:42.058 }, 00:08:42.058 { 00:08:42.058 "name": "BaseBdev3", 00:08:42.058 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:42.058 "is_configured": false, 00:08:42.058 "data_offset": 0, 00:08:42.058 "data_size": 0 00:08:42.058 } 00:08:42.058 ] 00:08:42.058 }' 00:08:42.058 16:05:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:42.058 16:05:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.627 16:05:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:42.627 16:05:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.627 16:05:08 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.628 [2024-12-12 16:05:08.686939] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:42.628 [2024-12-12 16:05:08.687080] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:42.628 16:05:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.628 16:05:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:42.628 16:05:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.628 16:05:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.628 [2024-12-12 16:05:08.698915] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:42.628 [2024-12-12 16:05:08.699050] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:42.628 [2024-12-12 16:05:08.699079] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:42.628 [2024-12-12 16:05:08.699104] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:42.628 [2024-12-12 16:05:08.699123] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:42.628 [2024-12-12 16:05:08.699146] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:42.628 16:05:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.628 16:05:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:42.628 16:05:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:08:42.628 16:05:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.628 [2024-12-12 16:05:08.749933] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:42.628 BaseBdev1 00:08:42.628 16:05:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.628 16:05:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:42.628 16:05:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:42.628 16:05:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:42.628 16:05:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:42.628 16:05:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:42.628 16:05:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:42.628 16:05:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:42.628 16:05:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.628 16:05:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.628 16:05:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.628 16:05:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:42.628 16:05:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.628 16:05:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.628 [ 00:08:42.628 { 00:08:42.628 "name": "BaseBdev1", 00:08:42.628 "aliases": [ 00:08:42.628 "6136de3d-6e2c-4fb8-b3c9-2a62ecd51474" 00:08:42.628 ], 00:08:42.628 
"product_name": "Malloc disk", 00:08:42.628 "block_size": 512, 00:08:42.628 "num_blocks": 65536, 00:08:42.628 "uuid": "6136de3d-6e2c-4fb8-b3c9-2a62ecd51474", 00:08:42.628 "assigned_rate_limits": { 00:08:42.628 "rw_ios_per_sec": 0, 00:08:42.628 "rw_mbytes_per_sec": 0, 00:08:42.628 "r_mbytes_per_sec": 0, 00:08:42.628 "w_mbytes_per_sec": 0 00:08:42.628 }, 00:08:42.628 "claimed": true, 00:08:42.628 "claim_type": "exclusive_write", 00:08:42.628 "zoned": false, 00:08:42.628 "supported_io_types": { 00:08:42.628 "read": true, 00:08:42.628 "write": true, 00:08:42.628 "unmap": true, 00:08:42.628 "flush": true, 00:08:42.628 "reset": true, 00:08:42.628 "nvme_admin": false, 00:08:42.628 "nvme_io": false, 00:08:42.628 "nvme_io_md": false, 00:08:42.628 "write_zeroes": true, 00:08:42.628 "zcopy": true, 00:08:42.628 "get_zone_info": false, 00:08:42.628 "zone_management": false, 00:08:42.628 "zone_append": false, 00:08:42.628 "compare": false, 00:08:42.628 "compare_and_write": false, 00:08:42.628 "abort": true, 00:08:42.628 "seek_hole": false, 00:08:42.628 "seek_data": false, 00:08:42.628 "copy": true, 00:08:42.628 "nvme_iov_md": false 00:08:42.628 }, 00:08:42.628 "memory_domains": [ 00:08:42.628 { 00:08:42.628 "dma_device_id": "system", 00:08:42.628 "dma_device_type": 1 00:08:42.628 }, 00:08:42.628 { 00:08:42.628 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:42.628 "dma_device_type": 2 00:08:42.628 } 00:08:42.628 ], 00:08:42.628 "driver_specific": {} 00:08:42.628 } 00:08:42.628 ] 00:08:42.628 16:05:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.628 16:05:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:42.628 16:05:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:42.628 16:05:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:42.628 16:05:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:42.628 16:05:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:42.628 16:05:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:42.628 16:05:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:42.628 16:05:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:42.628 16:05:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:42.628 16:05:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:42.628 16:05:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:42.628 16:05:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.628 16:05:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:42.628 16:05:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.628 16:05:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.628 16:05:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.628 16:05:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:42.628 "name": "Existed_Raid", 00:08:42.628 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:42.628 "strip_size_kb": 64, 00:08:42.628 "state": "configuring", 00:08:42.628 "raid_level": "raid0", 00:08:42.628 "superblock": false, 00:08:42.628 "num_base_bdevs": 3, 00:08:42.628 "num_base_bdevs_discovered": 1, 00:08:42.628 "num_base_bdevs_operational": 3, 00:08:42.628 "base_bdevs_list": [ 00:08:42.628 { 00:08:42.628 "name": "BaseBdev1", 
00:08:42.628 "uuid": "6136de3d-6e2c-4fb8-b3c9-2a62ecd51474", 00:08:42.628 "is_configured": true, 00:08:42.628 "data_offset": 0, 00:08:42.628 "data_size": 65536 00:08:42.628 }, 00:08:42.628 { 00:08:42.628 "name": "BaseBdev2", 00:08:42.628 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:42.628 "is_configured": false, 00:08:42.628 "data_offset": 0, 00:08:42.628 "data_size": 0 00:08:42.628 }, 00:08:42.628 { 00:08:42.628 "name": "BaseBdev3", 00:08:42.628 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:42.628 "is_configured": false, 00:08:42.628 "data_offset": 0, 00:08:42.628 "data_size": 0 00:08:42.628 } 00:08:42.628 ] 00:08:42.628 }' 00:08:42.628 16:05:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:42.628 16:05:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.197 16:05:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:43.197 16:05:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.197 16:05:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.197 [2024-12-12 16:05:09.253159] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:43.197 [2024-12-12 16:05:09.253247] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:43.197 16:05:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.197 16:05:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:43.197 16:05:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.197 16:05:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.197 [2024-12-12 
16:05:09.265154] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:43.197 [2024-12-12 16:05:09.267330] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:43.197 [2024-12-12 16:05:09.267382] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:43.197 [2024-12-12 16:05:09.267393] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:43.197 [2024-12-12 16:05:09.267402] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:43.197 16:05:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.197 16:05:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:43.197 16:05:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:43.197 16:05:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:43.197 16:05:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:43.197 16:05:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:43.197 16:05:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:43.197 16:05:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:43.197 16:05:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:43.197 16:05:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:43.197 16:05:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:43.197 16:05:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:43.197 16:05:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:43.197 16:05:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:43.197 16:05:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.197 16:05:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.197 16:05:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.197 16:05:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.197 16:05:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:43.197 "name": "Existed_Raid", 00:08:43.197 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:43.197 "strip_size_kb": 64, 00:08:43.197 "state": "configuring", 00:08:43.197 "raid_level": "raid0", 00:08:43.197 "superblock": false, 00:08:43.197 "num_base_bdevs": 3, 00:08:43.197 "num_base_bdevs_discovered": 1, 00:08:43.197 "num_base_bdevs_operational": 3, 00:08:43.197 "base_bdevs_list": [ 00:08:43.197 { 00:08:43.197 "name": "BaseBdev1", 00:08:43.197 "uuid": "6136de3d-6e2c-4fb8-b3c9-2a62ecd51474", 00:08:43.197 "is_configured": true, 00:08:43.197 "data_offset": 0, 00:08:43.197 "data_size": 65536 00:08:43.197 }, 00:08:43.197 { 00:08:43.197 "name": "BaseBdev2", 00:08:43.197 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:43.197 "is_configured": false, 00:08:43.197 "data_offset": 0, 00:08:43.197 "data_size": 0 00:08:43.197 }, 00:08:43.197 { 00:08:43.197 "name": "BaseBdev3", 00:08:43.197 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:43.197 "is_configured": false, 00:08:43.197 "data_offset": 0, 00:08:43.197 "data_size": 0 00:08:43.197 } 00:08:43.197 ] 00:08:43.197 }' 00:08:43.197 16:05:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:08:43.197 16:05:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.456 16:05:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:43.456 16:05:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.456 16:05:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.456 [2024-12-12 16:05:09.762278] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:43.456 BaseBdev2 00:08:43.456 16:05:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.456 16:05:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:43.456 16:05:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:43.456 16:05:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:43.456 16:05:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:43.456 16:05:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:43.456 16:05:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:43.456 16:05:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:43.456 16:05:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.456 16:05:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.456 16:05:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.456 16:05:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:43.456 16:05:09 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.456 16:05:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.456 [ 00:08:43.456 { 00:08:43.456 "name": "BaseBdev2", 00:08:43.456 "aliases": [ 00:08:43.456 "46f8663e-60bd-48e9-8599-b954f32f11c6" 00:08:43.456 ], 00:08:43.456 "product_name": "Malloc disk", 00:08:43.456 "block_size": 512, 00:08:43.456 "num_blocks": 65536, 00:08:43.456 "uuid": "46f8663e-60bd-48e9-8599-b954f32f11c6", 00:08:43.456 "assigned_rate_limits": { 00:08:43.456 "rw_ios_per_sec": 0, 00:08:43.456 "rw_mbytes_per_sec": 0, 00:08:43.456 "r_mbytes_per_sec": 0, 00:08:43.456 "w_mbytes_per_sec": 0 00:08:43.456 }, 00:08:43.456 "claimed": true, 00:08:43.456 "claim_type": "exclusive_write", 00:08:43.456 "zoned": false, 00:08:43.456 "supported_io_types": { 00:08:43.456 "read": true, 00:08:43.456 "write": true, 00:08:43.456 "unmap": true, 00:08:43.456 "flush": true, 00:08:43.456 "reset": true, 00:08:43.456 "nvme_admin": false, 00:08:43.456 "nvme_io": false, 00:08:43.456 "nvme_io_md": false, 00:08:43.456 "write_zeroes": true, 00:08:43.456 "zcopy": true, 00:08:43.456 "get_zone_info": false, 00:08:43.456 "zone_management": false, 00:08:43.456 "zone_append": false, 00:08:43.456 "compare": false, 00:08:43.456 "compare_and_write": false, 00:08:43.456 "abort": true, 00:08:43.456 "seek_hole": false, 00:08:43.456 "seek_data": false, 00:08:43.456 "copy": true, 00:08:43.456 "nvme_iov_md": false 00:08:43.456 }, 00:08:43.456 "memory_domains": [ 00:08:43.456 { 00:08:43.456 "dma_device_id": "system", 00:08:43.456 "dma_device_type": 1 00:08:43.456 }, 00:08:43.456 { 00:08:43.456 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:43.456 "dma_device_type": 2 00:08:43.456 } 00:08:43.456 ], 00:08:43.456 "driver_specific": {} 00:08:43.456 } 00:08:43.456 ] 00:08:43.456 16:05:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.456 16:05:09 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:43.456 16:05:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:43.716 16:05:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:43.716 16:05:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:43.716 16:05:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:43.716 16:05:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:43.716 16:05:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:43.716 16:05:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:43.716 16:05:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:43.716 16:05:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:43.716 16:05:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:43.716 16:05:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:43.716 16:05:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:43.716 16:05:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.716 16:05:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.716 16:05:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.716 16:05:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:43.716 16:05:09 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.716 16:05:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:43.716 "name": "Existed_Raid", 00:08:43.716 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:43.716 "strip_size_kb": 64, 00:08:43.716 "state": "configuring", 00:08:43.716 "raid_level": "raid0", 00:08:43.716 "superblock": false, 00:08:43.716 "num_base_bdevs": 3, 00:08:43.716 "num_base_bdevs_discovered": 2, 00:08:43.716 "num_base_bdevs_operational": 3, 00:08:43.716 "base_bdevs_list": [ 00:08:43.716 { 00:08:43.716 "name": "BaseBdev1", 00:08:43.716 "uuid": "6136de3d-6e2c-4fb8-b3c9-2a62ecd51474", 00:08:43.716 "is_configured": true, 00:08:43.716 "data_offset": 0, 00:08:43.716 "data_size": 65536 00:08:43.716 }, 00:08:43.716 { 00:08:43.716 "name": "BaseBdev2", 00:08:43.716 "uuid": "46f8663e-60bd-48e9-8599-b954f32f11c6", 00:08:43.716 "is_configured": true, 00:08:43.716 "data_offset": 0, 00:08:43.716 "data_size": 65536 00:08:43.716 }, 00:08:43.716 { 00:08:43.716 "name": "BaseBdev3", 00:08:43.716 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:43.716 "is_configured": false, 00:08:43.716 "data_offset": 0, 00:08:43.716 "data_size": 0 00:08:43.716 } 00:08:43.716 ] 00:08:43.716 }' 00:08:43.716 16:05:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:43.716 16:05:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.975 16:05:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:43.975 16:05:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.975 16:05:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.975 [2024-12-12 16:05:10.319910] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:43.975 [2024-12-12 16:05:10.320094] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:43.975 [2024-12-12 16:05:10.320117] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:43.975 [2024-12-12 16:05:10.320451] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:43.975 [2024-12-12 16:05:10.320662] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:43.975 [2024-12-12 16:05:10.320673] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:43.975 [2024-12-12 16:05:10.321031] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:43.975 BaseBdev3 00:08:43.975 16:05:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.975 16:05:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:43.975 16:05:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:43.975 16:05:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:43.975 16:05:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:43.975 16:05:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:43.975 16:05:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:43.975 16:05:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:43.975 16:05:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.975 16:05:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.234 16:05:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.234 
16:05:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:44.234 16:05:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.234 16:05:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.234 [ 00:08:44.234 { 00:08:44.234 "name": "BaseBdev3", 00:08:44.234 "aliases": [ 00:08:44.234 "e2beb58b-c065-4632-9c66-3d5e85c33f76" 00:08:44.234 ], 00:08:44.234 "product_name": "Malloc disk", 00:08:44.234 "block_size": 512, 00:08:44.234 "num_blocks": 65536, 00:08:44.234 "uuid": "e2beb58b-c065-4632-9c66-3d5e85c33f76", 00:08:44.234 "assigned_rate_limits": { 00:08:44.234 "rw_ios_per_sec": 0, 00:08:44.234 "rw_mbytes_per_sec": 0, 00:08:44.234 "r_mbytes_per_sec": 0, 00:08:44.234 "w_mbytes_per_sec": 0 00:08:44.234 }, 00:08:44.234 "claimed": true, 00:08:44.234 "claim_type": "exclusive_write", 00:08:44.234 "zoned": false, 00:08:44.234 "supported_io_types": { 00:08:44.234 "read": true, 00:08:44.234 "write": true, 00:08:44.234 "unmap": true, 00:08:44.234 "flush": true, 00:08:44.234 "reset": true, 00:08:44.234 "nvme_admin": false, 00:08:44.234 "nvme_io": false, 00:08:44.234 "nvme_io_md": false, 00:08:44.234 "write_zeroes": true, 00:08:44.234 "zcopy": true, 00:08:44.234 "get_zone_info": false, 00:08:44.234 "zone_management": false, 00:08:44.234 "zone_append": false, 00:08:44.234 "compare": false, 00:08:44.234 "compare_and_write": false, 00:08:44.234 "abort": true, 00:08:44.234 "seek_hole": false, 00:08:44.234 "seek_data": false, 00:08:44.234 "copy": true, 00:08:44.234 "nvme_iov_md": false 00:08:44.234 }, 00:08:44.234 "memory_domains": [ 00:08:44.234 { 00:08:44.234 "dma_device_id": "system", 00:08:44.234 "dma_device_type": 1 00:08:44.234 }, 00:08:44.234 { 00:08:44.234 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:44.234 "dma_device_type": 2 00:08:44.234 } 00:08:44.234 ], 00:08:44.234 "driver_specific": {} 00:08:44.234 } 00:08:44.234 ] 
00:08:44.234 16:05:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.234 16:05:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:44.234 16:05:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:44.234 16:05:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:44.234 16:05:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:44.234 16:05:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:44.235 16:05:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:44.235 16:05:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:44.235 16:05:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:44.235 16:05:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:44.235 16:05:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:44.235 16:05:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:44.235 16:05:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:44.235 16:05:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:44.235 16:05:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.235 16:05:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:44.235 16:05:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.235 16:05:10 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:44.235 16:05:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.235 16:05:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:44.235 "name": "Existed_Raid", 00:08:44.235 "uuid": "2df44bb9-307b-4990-b434-7666437b553f", 00:08:44.235 "strip_size_kb": 64, 00:08:44.235 "state": "online", 00:08:44.235 "raid_level": "raid0", 00:08:44.235 "superblock": false, 00:08:44.235 "num_base_bdevs": 3, 00:08:44.235 "num_base_bdevs_discovered": 3, 00:08:44.235 "num_base_bdevs_operational": 3, 00:08:44.235 "base_bdevs_list": [ 00:08:44.235 { 00:08:44.235 "name": "BaseBdev1", 00:08:44.235 "uuid": "6136de3d-6e2c-4fb8-b3c9-2a62ecd51474", 00:08:44.235 "is_configured": true, 00:08:44.235 "data_offset": 0, 00:08:44.235 "data_size": 65536 00:08:44.235 }, 00:08:44.235 { 00:08:44.235 "name": "BaseBdev2", 00:08:44.235 "uuid": "46f8663e-60bd-48e9-8599-b954f32f11c6", 00:08:44.235 "is_configured": true, 00:08:44.235 "data_offset": 0, 00:08:44.235 "data_size": 65536 00:08:44.235 }, 00:08:44.235 { 00:08:44.235 "name": "BaseBdev3", 00:08:44.235 "uuid": "e2beb58b-c065-4632-9c66-3d5e85c33f76", 00:08:44.235 "is_configured": true, 00:08:44.235 "data_offset": 0, 00:08:44.235 "data_size": 65536 00:08:44.235 } 00:08:44.235 ] 00:08:44.235 }' 00:08:44.235 16:05:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:44.235 16:05:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.494 16:05:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:44.494 16:05:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:44.494 16:05:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:44.494 16:05:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # 
local base_bdev_names 00:08:44.494 16:05:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:44.494 16:05:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:44.494 16:05:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:44.494 16:05:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:44.494 16:05:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.494 16:05:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.494 [2024-12-12 16:05:10.823509] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:44.753 16:05:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.753 16:05:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:44.753 "name": "Existed_Raid", 00:08:44.753 "aliases": [ 00:08:44.753 "2df44bb9-307b-4990-b434-7666437b553f" 00:08:44.753 ], 00:08:44.753 "product_name": "Raid Volume", 00:08:44.753 "block_size": 512, 00:08:44.753 "num_blocks": 196608, 00:08:44.753 "uuid": "2df44bb9-307b-4990-b434-7666437b553f", 00:08:44.753 "assigned_rate_limits": { 00:08:44.753 "rw_ios_per_sec": 0, 00:08:44.753 "rw_mbytes_per_sec": 0, 00:08:44.753 "r_mbytes_per_sec": 0, 00:08:44.753 "w_mbytes_per_sec": 0 00:08:44.753 }, 00:08:44.753 "claimed": false, 00:08:44.753 "zoned": false, 00:08:44.753 "supported_io_types": { 00:08:44.753 "read": true, 00:08:44.753 "write": true, 00:08:44.753 "unmap": true, 00:08:44.753 "flush": true, 00:08:44.753 "reset": true, 00:08:44.753 "nvme_admin": false, 00:08:44.753 "nvme_io": false, 00:08:44.753 "nvme_io_md": false, 00:08:44.753 "write_zeroes": true, 00:08:44.753 "zcopy": false, 00:08:44.753 "get_zone_info": false, 00:08:44.753 "zone_management": false, 00:08:44.753 
"zone_append": false, 00:08:44.753 "compare": false, 00:08:44.753 "compare_and_write": false, 00:08:44.753 "abort": false, 00:08:44.753 "seek_hole": false, 00:08:44.753 "seek_data": false, 00:08:44.753 "copy": false, 00:08:44.753 "nvme_iov_md": false 00:08:44.753 }, 00:08:44.753 "memory_domains": [ 00:08:44.753 { 00:08:44.753 "dma_device_id": "system", 00:08:44.753 "dma_device_type": 1 00:08:44.753 }, 00:08:44.753 { 00:08:44.753 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:44.753 "dma_device_type": 2 00:08:44.753 }, 00:08:44.753 { 00:08:44.753 "dma_device_id": "system", 00:08:44.753 "dma_device_type": 1 00:08:44.753 }, 00:08:44.753 { 00:08:44.753 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:44.753 "dma_device_type": 2 00:08:44.753 }, 00:08:44.753 { 00:08:44.753 "dma_device_id": "system", 00:08:44.753 "dma_device_type": 1 00:08:44.753 }, 00:08:44.753 { 00:08:44.753 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:44.753 "dma_device_type": 2 00:08:44.753 } 00:08:44.753 ], 00:08:44.753 "driver_specific": { 00:08:44.753 "raid": { 00:08:44.753 "uuid": "2df44bb9-307b-4990-b434-7666437b553f", 00:08:44.753 "strip_size_kb": 64, 00:08:44.753 "state": "online", 00:08:44.753 "raid_level": "raid0", 00:08:44.753 "superblock": false, 00:08:44.753 "num_base_bdevs": 3, 00:08:44.753 "num_base_bdevs_discovered": 3, 00:08:44.753 "num_base_bdevs_operational": 3, 00:08:44.753 "base_bdevs_list": [ 00:08:44.753 { 00:08:44.753 "name": "BaseBdev1", 00:08:44.753 "uuid": "6136de3d-6e2c-4fb8-b3c9-2a62ecd51474", 00:08:44.753 "is_configured": true, 00:08:44.754 "data_offset": 0, 00:08:44.754 "data_size": 65536 00:08:44.754 }, 00:08:44.754 { 00:08:44.754 "name": "BaseBdev2", 00:08:44.754 "uuid": "46f8663e-60bd-48e9-8599-b954f32f11c6", 00:08:44.754 "is_configured": true, 00:08:44.754 "data_offset": 0, 00:08:44.754 "data_size": 65536 00:08:44.754 }, 00:08:44.754 { 00:08:44.754 "name": "BaseBdev3", 00:08:44.754 "uuid": "e2beb58b-c065-4632-9c66-3d5e85c33f76", 00:08:44.754 "is_configured": true, 
00:08:44.754 "data_offset": 0, 00:08:44.754 "data_size": 65536 00:08:44.754 } 00:08:44.754 ] 00:08:44.754 } 00:08:44.754 } 00:08:44.754 }' 00:08:44.754 16:05:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:44.754 16:05:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:44.754 BaseBdev2 00:08:44.754 BaseBdev3' 00:08:44.754 16:05:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:44.754 16:05:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:44.754 16:05:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:44.754 16:05:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:44.754 16:05:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.754 16:05:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.754 16:05:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:44.754 16:05:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.754 16:05:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:44.754 16:05:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:44.754 16:05:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:44.754 16:05:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:44.754 16:05:10 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:44.754 16:05:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.754 16:05:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.754 16:05:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.754 16:05:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:44.754 16:05:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:44.754 16:05:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:44.754 16:05:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:44.754 16:05:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:44.754 16:05:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.754 16:05:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.754 16:05:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.754 16:05:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:44.754 16:05:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:44.754 16:05:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:44.754 16:05:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.754 16:05:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.754 [2024-12-12 16:05:11.070770] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:44.754 [2024-12-12 16:05:11.070928] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:44.754 [2024-12-12 16:05:11.071020] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:45.013 16:05:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.013 16:05:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:45.013 16:05:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:45.013 16:05:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:45.013 16:05:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:45.013 16:05:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:45.013 16:05:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:08:45.013 16:05:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:45.013 16:05:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:45.013 16:05:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:45.014 16:05:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:45.014 16:05:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:45.014 16:05:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:45.014 16:05:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:45.014 16:05:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:08:45.014 16:05:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:45.014 16:05:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.014 16:05:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.014 16:05:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.014 16:05:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:45.014 16:05:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.014 16:05:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:45.014 "name": "Existed_Raid", 00:08:45.014 "uuid": "2df44bb9-307b-4990-b434-7666437b553f", 00:08:45.014 "strip_size_kb": 64, 00:08:45.014 "state": "offline", 00:08:45.014 "raid_level": "raid0", 00:08:45.014 "superblock": false, 00:08:45.014 "num_base_bdevs": 3, 00:08:45.014 "num_base_bdevs_discovered": 2, 00:08:45.014 "num_base_bdevs_operational": 2, 00:08:45.014 "base_bdevs_list": [ 00:08:45.014 { 00:08:45.014 "name": null, 00:08:45.014 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:45.014 "is_configured": false, 00:08:45.014 "data_offset": 0, 00:08:45.014 "data_size": 65536 00:08:45.014 }, 00:08:45.014 { 00:08:45.014 "name": "BaseBdev2", 00:08:45.014 "uuid": "46f8663e-60bd-48e9-8599-b954f32f11c6", 00:08:45.014 "is_configured": true, 00:08:45.014 "data_offset": 0, 00:08:45.014 "data_size": 65536 00:08:45.014 }, 00:08:45.014 { 00:08:45.014 "name": "BaseBdev3", 00:08:45.014 "uuid": "e2beb58b-c065-4632-9c66-3d5e85c33f76", 00:08:45.014 "is_configured": true, 00:08:45.014 "data_offset": 0, 00:08:45.014 "data_size": 65536 00:08:45.014 } 00:08:45.014 ] 00:08:45.014 }' 00:08:45.014 16:05:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:45.014 16:05:11 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.273 16:05:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:45.273 16:05:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:45.273 16:05:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.273 16:05:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.273 16:05:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.273 16:05:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:45.273 16:05:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.533 16:05:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:45.533 16:05:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:45.533 16:05:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:45.533 16:05:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.533 16:05:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.533 [2024-12-12 16:05:11.633765] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:45.533 16:05:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.533 16:05:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:45.533 16:05:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:45.533 16:05:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.533 16:05:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:45.533 16:05:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.533 16:05:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.533 16:05:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.533 16:05:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:45.533 16:05:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:45.533 16:05:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:45.533 16:05:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.533 16:05:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.533 [2024-12-12 16:05:11.798803] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:45.533 [2024-12-12 16:05:11.798875] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:45.793 16:05:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.793 16:05:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:45.793 16:05:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:45.793 16:05:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:45.793 16:05:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.793 16:05:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.793 16:05:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 00:08:45.793 16:05:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.793 16:05:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:45.793 16:05:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:45.793 16:05:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:45.793 16:05:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:45.793 16:05:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:45.793 16:05:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:45.793 16:05:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.793 16:05:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.793 BaseBdev2 00:08:45.793 16:05:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.793 16:05:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:45.793 16:05:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:45.793 16:05:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:45.793 16:05:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:45.793 16:05:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:45.793 16:05:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:45.793 16:05:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:45.793 16:05:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:45.793 16:05:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.793 16:05:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.793 16:05:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:45.793 16:05:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.793 16:05:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.793 [ 00:08:45.793 { 00:08:45.793 "name": "BaseBdev2", 00:08:45.793 "aliases": [ 00:08:45.793 "832859cf-6a19-4636-a01e-fbe0cb9eba5d" 00:08:45.793 ], 00:08:45.793 "product_name": "Malloc disk", 00:08:45.793 "block_size": 512, 00:08:45.793 "num_blocks": 65536, 00:08:45.793 "uuid": "832859cf-6a19-4636-a01e-fbe0cb9eba5d", 00:08:45.793 "assigned_rate_limits": { 00:08:45.793 "rw_ios_per_sec": 0, 00:08:45.793 "rw_mbytes_per_sec": 0, 00:08:45.793 "r_mbytes_per_sec": 0, 00:08:45.793 "w_mbytes_per_sec": 0 00:08:45.793 }, 00:08:45.793 "claimed": false, 00:08:45.793 "zoned": false, 00:08:45.793 "supported_io_types": { 00:08:45.793 "read": true, 00:08:45.793 "write": true, 00:08:45.793 "unmap": true, 00:08:45.793 "flush": true, 00:08:45.793 "reset": true, 00:08:45.793 "nvme_admin": false, 00:08:45.793 "nvme_io": false, 00:08:45.793 "nvme_io_md": false, 00:08:45.793 "write_zeroes": true, 00:08:45.793 "zcopy": true, 00:08:45.793 "get_zone_info": false, 00:08:45.793 "zone_management": false, 00:08:45.793 "zone_append": false, 00:08:45.793 "compare": false, 00:08:45.793 "compare_and_write": false, 00:08:45.793 "abort": true, 00:08:45.793 "seek_hole": false, 00:08:45.793 "seek_data": false, 00:08:45.793 "copy": true, 00:08:45.793 "nvme_iov_md": false 00:08:45.793 }, 00:08:45.793 "memory_domains": [ 00:08:45.793 { 00:08:45.793 "dma_device_id": "system", 00:08:45.793 "dma_device_type": 1 00:08:45.793 }, 
00:08:45.793 { 00:08:45.793 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:45.793 "dma_device_type": 2 00:08:45.793 } 00:08:45.793 ], 00:08:45.793 "driver_specific": {} 00:08:45.793 } 00:08:45.793 ] 00:08:45.793 16:05:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.793 16:05:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:45.793 16:05:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:45.793 16:05:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:45.793 16:05:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:45.793 16:05:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.793 16:05:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.793 BaseBdev3 00:08:45.793 16:05:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.793 16:05:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:45.793 16:05:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:45.793 16:05:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:45.794 16:05:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:45.794 16:05:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:45.794 16:05:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:45.794 16:05:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:45.794 16:05:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:08:45.794 16:05:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.794 16:05:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.794 16:05:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:45.794 16:05:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.794 16:05:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.794 [ 00:08:45.794 { 00:08:45.794 "name": "BaseBdev3", 00:08:45.794 "aliases": [ 00:08:45.794 "ba29acf8-7fb2-4ff1-a8d7-a84595ef5375" 00:08:45.794 ], 00:08:45.794 "product_name": "Malloc disk", 00:08:45.794 "block_size": 512, 00:08:45.794 "num_blocks": 65536, 00:08:45.794 "uuid": "ba29acf8-7fb2-4ff1-a8d7-a84595ef5375", 00:08:45.794 "assigned_rate_limits": { 00:08:45.794 "rw_ios_per_sec": 0, 00:08:45.794 "rw_mbytes_per_sec": 0, 00:08:45.794 "r_mbytes_per_sec": 0, 00:08:45.794 "w_mbytes_per_sec": 0 00:08:45.794 }, 00:08:45.794 "claimed": false, 00:08:45.794 "zoned": false, 00:08:45.794 "supported_io_types": { 00:08:45.794 "read": true, 00:08:45.794 "write": true, 00:08:45.794 "unmap": true, 00:08:45.794 "flush": true, 00:08:45.794 "reset": true, 00:08:45.794 "nvme_admin": false, 00:08:45.794 "nvme_io": false, 00:08:45.794 "nvme_io_md": false, 00:08:45.794 "write_zeroes": true, 00:08:45.794 "zcopy": true, 00:08:45.794 "get_zone_info": false, 00:08:45.794 "zone_management": false, 00:08:45.794 "zone_append": false, 00:08:45.794 "compare": false, 00:08:45.794 "compare_and_write": false, 00:08:45.794 "abort": true, 00:08:45.794 "seek_hole": false, 00:08:45.794 "seek_data": false, 00:08:45.794 "copy": true, 00:08:45.794 "nvme_iov_md": false 00:08:45.794 }, 00:08:45.794 "memory_domains": [ 00:08:45.794 { 00:08:45.794 "dma_device_id": "system", 00:08:45.794 "dma_device_type": 1 00:08:45.794 }, 00:08:45.794 { 
00:08:45.794 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:45.794 "dma_device_type": 2 00:08:45.794 } 00:08:45.794 ], 00:08:45.794 "driver_specific": {} 00:08:45.794 } 00:08:45.794 ] 00:08:45.794 16:05:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.794 16:05:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:45.794 16:05:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:45.794 16:05:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:45.794 16:05:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:45.794 16:05:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.794 16:05:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.794 [2024-12-12 16:05:12.142703] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:45.794 [2024-12-12 16:05:12.142847] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:45.794 [2024-12-12 16:05:12.142902] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:46.056 [2024-12-12 16:05:12.145045] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:46.056 16:05:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.056 16:05:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:46.056 16:05:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:46.056 16:05:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:08:46.056 16:05:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:46.056 16:05:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:46.056 16:05:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:46.056 16:05:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:46.056 16:05:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:46.056 16:05:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:46.056 16:05:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:46.056 16:05:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.056 16:05:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:46.056 16:05:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.056 16:05:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.056 16:05:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.056 16:05:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:46.056 "name": "Existed_Raid", 00:08:46.056 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:46.056 "strip_size_kb": 64, 00:08:46.056 "state": "configuring", 00:08:46.056 "raid_level": "raid0", 00:08:46.056 "superblock": false, 00:08:46.056 "num_base_bdevs": 3, 00:08:46.056 "num_base_bdevs_discovered": 2, 00:08:46.056 "num_base_bdevs_operational": 3, 00:08:46.056 "base_bdevs_list": [ 00:08:46.056 { 00:08:46.056 "name": "BaseBdev1", 00:08:46.056 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:46.056 
"is_configured": false, 00:08:46.056 "data_offset": 0, 00:08:46.056 "data_size": 0 00:08:46.056 }, 00:08:46.056 { 00:08:46.056 "name": "BaseBdev2", 00:08:46.056 "uuid": "832859cf-6a19-4636-a01e-fbe0cb9eba5d", 00:08:46.056 "is_configured": true, 00:08:46.056 "data_offset": 0, 00:08:46.056 "data_size": 65536 00:08:46.056 }, 00:08:46.056 { 00:08:46.056 "name": "BaseBdev3", 00:08:46.056 "uuid": "ba29acf8-7fb2-4ff1-a8d7-a84595ef5375", 00:08:46.056 "is_configured": true, 00:08:46.056 "data_offset": 0, 00:08:46.056 "data_size": 65536 00:08:46.056 } 00:08:46.056 ] 00:08:46.056 }' 00:08:46.056 16:05:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:46.056 16:05:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.319 16:05:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:46.319 16:05:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.319 16:05:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.319 [2024-12-12 16:05:12.586004] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:46.319 16:05:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.319 16:05:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:46.319 16:05:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:46.319 16:05:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:46.319 16:05:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:46.319 16:05:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:46.319 16:05:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:46.319 16:05:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:46.319 16:05:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:46.319 16:05:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:46.319 16:05:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:46.319 16:05:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.319 16:05:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.319 16:05:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.319 16:05:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:46.319 16:05:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.319 16:05:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:46.319 "name": "Existed_Raid", 00:08:46.319 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:46.319 "strip_size_kb": 64, 00:08:46.319 "state": "configuring", 00:08:46.319 "raid_level": "raid0", 00:08:46.319 "superblock": false, 00:08:46.319 "num_base_bdevs": 3, 00:08:46.319 "num_base_bdevs_discovered": 1, 00:08:46.319 "num_base_bdevs_operational": 3, 00:08:46.319 "base_bdevs_list": [ 00:08:46.319 { 00:08:46.319 "name": "BaseBdev1", 00:08:46.319 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:46.319 "is_configured": false, 00:08:46.319 "data_offset": 0, 00:08:46.319 "data_size": 0 00:08:46.319 }, 00:08:46.319 { 00:08:46.319 "name": null, 00:08:46.319 "uuid": "832859cf-6a19-4636-a01e-fbe0cb9eba5d", 00:08:46.319 "is_configured": false, 00:08:46.319 "data_offset": 0, 
00:08:46.319 "data_size": 65536 00:08:46.319 }, 00:08:46.319 { 00:08:46.319 "name": "BaseBdev3", 00:08:46.319 "uuid": "ba29acf8-7fb2-4ff1-a8d7-a84595ef5375", 00:08:46.319 "is_configured": true, 00:08:46.319 "data_offset": 0, 00:08:46.319 "data_size": 65536 00:08:46.319 } 00:08:46.319 ] 00:08:46.319 }' 00:08:46.319 16:05:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:46.319 16:05:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.887 16:05:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:46.887 16:05:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.887 16:05:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.887 16:05:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.887 16:05:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.887 16:05:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:46.887 16:05:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:46.887 16:05:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.887 16:05:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.887 [2024-12-12 16:05:13.116156] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:46.887 BaseBdev1 00:08:46.887 16:05:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.887 16:05:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:46.887 16:05:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local 
bdev_name=BaseBdev1 00:08:46.887 16:05:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:46.887 16:05:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:46.887 16:05:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:46.887 16:05:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:46.887 16:05:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:46.887 16:05:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.887 16:05:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.887 16:05:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.887 16:05:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:46.887 16:05:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.887 16:05:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.887 [ 00:08:46.887 { 00:08:46.887 "name": "BaseBdev1", 00:08:46.887 "aliases": [ 00:08:46.887 "23399e5e-2da6-4451-b44e-378df4dc2085" 00:08:46.887 ], 00:08:46.887 "product_name": "Malloc disk", 00:08:46.887 "block_size": 512, 00:08:46.887 "num_blocks": 65536, 00:08:46.887 "uuid": "23399e5e-2da6-4451-b44e-378df4dc2085", 00:08:46.887 "assigned_rate_limits": { 00:08:46.887 "rw_ios_per_sec": 0, 00:08:46.887 "rw_mbytes_per_sec": 0, 00:08:46.887 "r_mbytes_per_sec": 0, 00:08:46.887 "w_mbytes_per_sec": 0 00:08:46.887 }, 00:08:46.887 "claimed": true, 00:08:46.887 "claim_type": "exclusive_write", 00:08:46.887 "zoned": false, 00:08:46.887 "supported_io_types": { 00:08:46.887 "read": true, 00:08:46.887 "write": true, 00:08:46.887 "unmap": 
true, 00:08:46.887 "flush": true, 00:08:46.887 "reset": true, 00:08:46.887 "nvme_admin": false, 00:08:46.887 "nvme_io": false, 00:08:46.887 "nvme_io_md": false, 00:08:46.887 "write_zeroes": true, 00:08:46.887 "zcopy": true, 00:08:46.887 "get_zone_info": false, 00:08:46.887 "zone_management": false, 00:08:46.887 "zone_append": false, 00:08:46.887 "compare": false, 00:08:46.888 "compare_and_write": false, 00:08:46.888 "abort": true, 00:08:46.888 "seek_hole": false, 00:08:46.888 "seek_data": false, 00:08:46.888 "copy": true, 00:08:46.888 "nvme_iov_md": false 00:08:46.888 }, 00:08:46.888 "memory_domains": [ 00:08:46.888 { 00:08:46.888 "dma_device_id": "system", 00:08:46.888 "dma_device_type": 1 00:08:46.888 }, 00:08:46.888 { 00:08:46.888 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:46.888 "dma_device_type": 2 00:08:46.888 } 00:08:46.888 ], 00:08:46.888 "driver_specific": {} 00:08:46.888 } 00:08:46.888 ] 00:08:46.888 16:05:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.888 16:05:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:46.888 16:05:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:46.888 16:05:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:46.888 16:05:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:46.888 16:05:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:46.888 16:05:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:46.888 16:05:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:46.888 16:05:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:46.888 16:05:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:46.888 16:05:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:46.888 16:05:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:46.888 16:05:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:46.888 16:05:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.888 16:05:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.888 16:05:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.888 16:05:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.888 16:05:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:46.888 "name": "Existed_Raid", 00:08:46.888 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:46.888 "strip_size_kb": 64, 00:08:46.888 "state": "configuring", 00:08:46.888 "raid_level": "raid0", 00:08:46.888 "superblock": false, 00:08:46.888 "num_base_bdevs": 3, 00:08:46.888 "num_base_bdevs_discovered": 2, 00:08:46.888 "num_base_bdevs_operational": 3, 00:08:46.888 "base_bdevs_list": [ 00:08:46.888 { 00:08:46.888 "name": "BaseBdev1", 00:08:46.888 "uuid": "23399e5e-2da6-4451-b44e-378df4dc2085", 00:08:46.888 "is_configured": true, 00:08:46.888 "data_offset": 0, 00:08:46.888 "data_size": 65536 00:08:46.888 }, 00:08:46.888 { 00:08:46.888 "name": null, 00:08:46.888 "uuid": "832859cf-6a19-4636-a01e-fbe0cb9eba5d", 00:08:46.888 "is_configured": false, 00:08:46.888 "data_offset": 0, 00:08:46.888 "data_size": 65536 00:08:46.888 }, 00:08:46.888 { 00:08:46.888 "name": "BaseBdev3", 00:08:46.888 "uuid": "ba29acf8-7fb2-4ff1-a8d7-a84595ef5375", 00:08:46.888 "is_configured": true, 00:08:46.888 "data_offset": 0, 
00:08:46.888 "data_size": 65536 00:08:46.888 } 00:08:46.888 ] 00:08:46.888 }' 00:08:46.888 16:05:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:46.888 16:05:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.456 16:05:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.456 16:05:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.456 16:05:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:47.456 16:05:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.456 16:05:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.456 16:05:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:47.456 16:05:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:47.456 16:05:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.456 16:05:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.456 [2024-12-12 16:05:13.651450] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:47.456 16:05:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.456 16:05:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:47.456 16:05:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:47.456 16:05:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:47.456 16:05:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:08:47.456 16:05:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:47.456 16:05:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:47.456 16:05:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:47.456 16:05:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:47.456 16:05:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:47.457 16:05:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:47.457 16:05:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.457 16:05:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:47.457 16:05:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.457 16:05:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.457 16:05:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.457 16:05:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:47.457 "name": "Existed_Raid", 00:08:47.457 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:47.457 "strip_size_kb": 64, 00:08:47.457 "state": "configuring", 00:08:47.457 "raid_level": "raid0", 00:08:47.457 "superblock": false, 00:08:47.457 "num_base_bdevs": 3, 00:08:47.457 "num_base_bdevs_discovered": 1, 00:08:47.457 "num_base_bdevs_operational": 3, 00:08:47.457 "base_bdevs_list": [ 00:08:47.457 { 00:08:47.457 "name": "BaseBdev1", 00:08:47.457 "uuid": "23399e5e-2da6-4451-b44e-378df4dc2085", 00:08:47.457 "is_configured": true, 00:08:47.457 "data_offset": 0, 00:08:47.457 "data_size": 65536 00:08:47.457 }, 00:08:47.457 { 
00:08:47.457 "name": null, 00:08:47.457 "uuid": "832859cf-6a19-4636-a01e-fbe0cb9eba5d", 00:08:47.457 "is_configured": false, 00:08:47.457 "data_offset": 0, 00:08:47.457 "data_size": 65536 00:08:47.457 }, 00:08:47.457 { 00:08:47.457 "name": null, 00:08:47.457 "uuid": "ba29acf8-7fb2-4ff1-a8d7-a84595ef5375", 00:08:47.457 "is_configured": false, 00:08:47.457 "data_offset": 0, 00:08:47.457 "data_size": 65536 00:08:47.457 } 00:08:47.457 ] 00:08:47.457 }' 00:08:47.457 16:05:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:47.457 16:05:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.716 16:05:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.716 16:05:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.716 16:05:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.716 16:05:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:47.975 16:05:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.975 16:05:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:47.975 16:05:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:47.975 16:05:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.975 16:05:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.975 [2024-12-12 16:05:14.098680] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:47.975 16:05:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.975 16:05:14 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:47.975 16:05:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:47.975 16:05:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:47.975 16:05:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:47.975 16:05:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:47.975 16:05:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:47.975 16:05:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:47.975 16:05:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:47.975 16:05:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:47.975 16:05:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:47.975 16:05:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:47.975 16:05:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.975 16:05:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.975 16:05:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.975 16:05:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.975 16:05:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:47.975 "name": "Existed_Raid", 00:08:47.975 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:47.975 "strip_size_kb": 64, 00:08:47.975 "state": "configuring", 00:08:47.975 "raid_level": "raid0", 00:08:47.975 
"superblock": false, 00:08:47.975 "num_base_bdevs": 3, 00:08:47.976 "num_base_bdevs_discovered": 2, 00:08:47.976 "num_base_bdevs_operational": 3, 00:08:47.976 "base_bdevs_list": [ 00:08:47.976 { 00:08:47.976 "name": "BaseBdev1", 00:08:47.976 "uuid": "23399e5e-2da6-4451-b44e-378df4dc2085", 00:08:47.976 "is_configured": true, 00:08:47.976 "data_offset": 0, 00:08:47.976 "data_size": 65536 00:08:47.976 }, 00:08:47.976 { 00:08:47.976 "name": null, 00:08:47.976 "uuid": "832859cf-6a19-4636-a01e-fbe0cb9eba5d", 00:08:47.976 "is_configured": false, 00:08:47.976 "data_offset": 0, 00:08:47.976 "data_size": 65536 00:08:47.976 }, 00:08:47.976 { 00:08:47.976 "name": "BaseBdev3", 00:08:47.976 "uuid": "ba29acf8-7fb2-4ff1-a8d7-a84595ef5375", 00:08:47.976 "is_configured": true, 00:08:47.976 "data_offset": 0, 00:08:47.976 "data_size": 65536 00:08:47.976 } 00:08:47.976 ] 00:08:47.976 }' 00:08:47.976 16:05:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:47.976 16:05:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.235 16:05:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:48.235 16:05:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.235 16:05:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.235 16:05:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.235 16:05:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.494 16:05:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:48.494 16:05:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:48.494 16:05:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:08:48.494 16:05:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.494 [2024-12-12 16:05:14.593869] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:48.494 16:05:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.494 16:05:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:48.494 16:05:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:48.494 16:05:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:48.494 16:05:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:48.494 16:05:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:48.494 16:05:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:48.494 16:05:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:48.494 16:05:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:48.494 16:05:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:48.494 16:05:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:48.494 16:05:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.494 16:05:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.494 16:05:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.494 16:05:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:48.494 16:05:14 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.494 16:05:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:48.494 "name": "Existed_Raid", 00:08:48.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:48.494 "strip_size_kb": 64, 00:08:48.494 "state": "configuring", 00:08:48.494 "raid_level": "raid0", 00:08:48.494 "superblock": false, 00:08:48.494 "num_base_bdevs": 3, 00:08:48.494 "num_base_bdevs_discovered": 1, 00:08:48.494 "num_base_bdevs_operational": 3, 00:08:48.494 "base_bdevs_list": [ 00:08:48.494 { 00:08:48.494 "name": null, 00:08:48.494 "uuid": "23399e5e-2da6-4451-b44e-378df4dc2085", 00:08:48.494 "is_configured": false, 00:08:48.494 "data_offset": 0, 00:08:48.494 "data_size": 65536 00:08:48.494 }, 00:08:48.494 { 00:08:48.494 "name": null, 00:08:48.494 "uuid": "832859cf-6a19-4636-a01e-fbe0cb9eba5d", 00:08:48.494 "is_configured": false, 00:08:48.494 "data_offset": 0, 00:08:48.494 "data_size": 65536 00:08:48.494 }, 00:08:48.494 { 00:08:48.494 "name": "BaseBdev3", 00:08:48.494 "uuid": "ba29acf8-7fb2-4ff1-a8d7-a84595ef5375", 00:08:48.494 "is_configured": true, 00:08:48.494 "data_offset": 0, 00:08:48.494 "data_size": 65536 00:08:48.494 } 00:08:48.494 ] 00:08:48.494 }' 00:08:48.494 16:05:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:48.494 16:05:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.062 16:05:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:49.062 16:05:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.062 16:05:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.062 16:05:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.062 16:05:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:08:49.062 16:05:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:49.062 16:05:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:49.062 16:05:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.062 16:05:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.062 [2024-12-12 16:05:15.161209] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:49.062 16:05:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.062 16:05:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:49.062 16:05:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:49.062 16:05:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:49.062 16:05:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:49.062 16:05:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:49.062 16:05:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:49.062 16:05:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:49.062 16:05:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:49.062 16:05:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:49.062 16:05:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:49.062 16:05:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:08:49.062 16:05:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.062 16:05:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.062 16:05:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:49.062 16:05:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.062 16:05:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:49.062 "name": "Existed_Raid", 00:08:49.062 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:49.062 "strip_size_kb": 64, 00:08:49.062 "state": "configuring", 00:08:49.062 "raid_level": "raid0", 00:08:49.062 "superblock": false, 00:08:49.062 "num_base_bdevs": 3, 00:08:49.062 "num_base_bdevs_discovered": 2, 00:08:49.062 "num_base_bdevs_operational": 3, 00:08:49.062 "base_bdevs_list": [ 00:08:49.062 { 00:08:49.062 "name": null, 00:08:49.062 "uuid": "23399e5e-2da6-4451-b44e-378df4dc2085", 00:08:49.062 "is_configured": false, 00:08:49.062 "data_offset": 0, 00:08:49.062 "data_size": 65536 00:08:49.062 }, 00:08:49.062 { 00:08:49.062 "name": "BaseBdev2", 00:08:49.062 "uuid": "832859cf-6a19-4636-a01e-fbe0cb9eba5d", 00:08:49.062 "is_configured": true, 00:08:49.062 "data_offset": 0, 00:08:49.062 "data_size": 65536 00:08:49.062 }, 00:08:49.062 { 00:08:49.062 "name": "BaseBdev3", 00:08:49.062 "uuid": "ba29acf8-7fb2-4ff1-a8d7-a84595ef5375", 00:08:49.062 "is_configured": true, 00:08:49.062 "data_offset": 0, 00:08:49.062 "data_size": 65536 00:08:49.062 } 00:08:49.062 ] 00:08:49.062 }' 00:08:49.062 16:05:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:49.062 16:05:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.322 16:05:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:49.322 
16:05:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.322 16:05:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.322 16:05:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.322 16:05:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.322 16:05:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:49.322 16:05:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.322 16:05:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.322 16:05:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:49.322 16:05:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.322 16:05:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.581 16:05:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 23399e5e-2da6-4451-b44e-378df4dc2085 00:08:49.581 16:05:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.581 16:05:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.581 [2024-12-12 16:05:15.728385] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:49.581 [2024-12-12 16:05:15.728530] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:49.581 [2024-12-12 16:05:15.728560] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:49.581 [2024-12-12 16:05:15.728876] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 
00:08:49.581 [2024-12-12 16:05:15.729131] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:49.581 [2024-12-12 16:05:15.729174] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:08:49.581 [2024-12-12 16:05:15.729513] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:49.581 NewBaseBdev 00:08:49.581 16:05:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.581 16:05:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:49.581 16:05:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:08:49.581 16:05:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:49.581 16:05:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:49.581 16:05:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:49.581 16:05:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:49.581 16:05:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:49.581 16:05:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.581 16:05:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.581 16:05:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.581 16:05:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:49.581 16:05:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.581 16:05:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:08:49.581 [ 00:08:49.581 { 00:08:49.581 "name": "NewBaseBdev", 00:08:49.581 "aliases": [ 00:08:49.581 "23399e5e-2da6-4451-b44e-378df4dc2085" 00:08:49.581 ], 00:08:49.581 "product_name": "Malloc disk", 00:08:49.581 "block_size": 512, 00:08:49.581 "num_blocks": 65536, 00:08:49.581 "uuid": "23399e5e-2da6-4451-b44e-378df4dc2085", 00:08:49.581 "assigned_rate_limits": { 00:08:49.581 "rw_ios_per_sec": 0, 00:08:49.581 "rw_mbytes_per_sec": 0, 00:08:49.581 "r_mbytes_per_sec": 0, 00:08:49.581 "w_mbytes_per_sec": 0 00:08:49.582 }, 00:08:49.582 "claimed": true, 00:08:49.582 "claim_type": "exclusive_write", 00:08:49.582 "zoned": false, 00:08:49.582 "supported_io_types": { 00:08:49.582 "read": true, 00:08:49.582 "write": true, 00:08:49.582 "unmap": true, 00:08:49.582 "flush": true, 00:08:49.582 "reset": true, 00:08:49.582 "nvme_admin": false, 00:08:49.582 "nvme_io": false, 00:08:49.582 "nvme_io_md": false, 00:08:49.582 "write_zeroes": true, 00:08:49.582 "zcopy": true, 00:08:49.582 "get_zone_info": false, 00:08:49.582 "zone_management": false, 00:08:49.582 "zone_append": false, 00:08:49.582 "compare": false, 00:08:49.582 "compare_and_write": false, 00:08:49.582 "abort": true, 00:08:49.582 "seek_hole": false, 00:08:49.582 "seek_data": false, 00:08:49.582 "copy": true, 00:08:49.582 "nvme_iov_md": false 00:08:49.582 }, 00:08:49.582 "memory_domains": [ 00:08:49.582 { 00:08:49.582 "dma_device_id": "system", 00:08:49.582 "dma_device_type": 1 00:08:49.582 }, 00:08:49.582 { 00:08:49.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:49.582 "dma_device_type": 2 00:08:49.582 } 00:08:49.582 ], 00:08:49.582 "driver_specific": {} 00:08:49.582 } 00:08:49.582 ] 00:08:49.582 16:05:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.582 16:05:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:49.582 16:05:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 3 00:08:49.582 16:05:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:49.582 16:05:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:49.582 16:05:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:49.582 16:05:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:49.582 16:05:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:49.582 16:05:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:49.582 16:05:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:49.582 16:05:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:49.582 16:05:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:49.582 16:05:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.582 16:05:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.582 16:05:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.582 16:05:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:49.582 16:05:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.582 16:05:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:49.582 "name": "Existed_Raid", 00:08:49.582 "uuid": "c5c6d51e-e9e4-44a4-aee2-f1aad44d5144", 00:08:49.582 "strip_size_kb": 64, 00:08:49.582 "state": "online", 00:08:49.582 "raid_level": "raid0", 00:08:49.582 "superblock": false, 00:08:49.582 "num_base_bdevs": 3, 00:08:49.582 
"num_base_bdevs_discovered": 3, 00:08:49.582 "num_base_bdevs_operational": 3, 00:08:49.582 "base_bdevs_list": [ 00:08:49.582 { 00:08:49.582 "name": "NewBaseBdev", 00:08:49.582 "uuid": "23399e5e-2da6-4451-b44e-378df4dc2085", 00:08:49.582 "is_configured": true, 00:08:49.582 "data_offset": 0, 00:08:49.582 "data_size": 65536 00:08:49.582 }, 00:08:49.582 { 00:08:49.582 "name": "BaseBdev2", 00:08:49.582 "uuid": "832859cf-6a19-4636-a01e-fbe0cb9eba5d", 00:08:49.582 "is_configured": true, 00:08:49.582 "data_offset": 0, 00:08:49.582 "data_size": 65536 00:08:49.582 }, 00:08:49.582 { 00:08:49.582 "name": "BaseBdev3", 00:08:49.582 "uuid": "ba29acf8-7fb2-4ff1-a8d7-a84595ef5375", 00:08:49.582 "is_configured": true, 00:08:49.582 "data_offset": 0, 00:08:49.582 "data_size": 65536 00:08:49.582 } 00:08:49.582 ] 00:08:49.582 }' 00:08:49.582 16:05:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:49.582 16:05:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.841 16:05:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:49.841 16:05:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:49.841 16:05:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:49.841 16:05:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:49.841 16:05:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:49.841 16:05:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:49.841 16:05:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:49.841 16:05:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.841 16:05:16 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:08:49.841 16:05:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:50.101 [2024-12-12 16:05:16.196184] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:50.101 16:05:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.101 16:05:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:50.101 "name": "Existed_Raid", 00:08:50.101 "aliases": [ 00:08:50.101 "c5c6d51e-e9e4-44a4-aee2-f1aad44d5144" 00:08:50.101 ], 00:08:50.101 "product_name": "Raid Volume", 00:08:50.101 "block_size": 512, 00:08:50.101 "num_blocks": 196608, 00:08:50.101 "uuid": "c5c6d51e-e9e4-44a4-aee2-f1aad44d5144", 00:08:50.101 "assigned_rate_limits": { 00:08:50.101 "rw_ios_per_sec": 0, 00:08:50.101 "rw_mbytes_per_sec": 0, 00:08:50.101 "r_mbytes_per_sec": 0, 00:08:50.101 "w_mbytes_per_sec": 0 00:08:50.101 }, 00:08:50.101 "claimed": false, 00:08:50.101 "zoned": false, 00:08:50.101 "supported_io_types": { 00:08:50.101 "read": true, 00:08:50.101 "write": true, 00:08:50.101 "unmap": true, 00:08:50.101 "flush": true, 00:08:50.101 "reset": true, 00:08:50.101 "nvme_admin": false, 00:08:50.101 "nvme_io": false, 00:08:50.101 "nvme_io_md": false, 00:08:50.101 "write_zeroes": true, 00:08:50.101 "zcopy": false, 00:08:50.101 "get_zone_info": false, 00:08:50.101 "zone_management": false, 00:08:50.101 "zone_append": false, 00:08:50.101 "compare": false, 00:08:50.101 "compare_and_write": false, 00:08:50.101 "abort": false, 00:08:50.101 "seek_hole": false, 00:08:50.101 "seek_data": false, 00:08:50.101 "copy": false, 00:08:50.101 "nvme_iov_md": false 00:08:50.101 }, 00:08:50.101 "memory_domains": [ 00:08:50.101 { 00:08:50.101 "dma_device_id": "system", 00:08:50.101 "dma_device_type": 1 00:08:50.101 }, 00:08:50.101 { 00:08:50.101 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:50.101 "dma_device_type": 2 00:08:50.101 }, 00:08:50.101 
{ 00:08:50.101 "dma_device_id": "system", 00:08:50.101 "dma_device_type": 1 00:08:50.101 }, 00:08:50.101 { 00:08:50.101 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:50.101 "dma_device_type": 2 00:08:50.101 }, 00:08:50.101 { 00:08:50.101 "dma_device_id": "system", 00:08:50.101 "dma_device_type": 1 00:08:50.101 }, 00:08:50.101 { 00:08:50.101 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:50.101 "dma_device_type": 2 00:08:50.101 } 00:08:50.101 ], 00:08:50.101 "driver_specific": { 00:08:50.101 "raid": { 00:08:50.101 "uuid": "c5c6d51e-e9e4-44a4-aee2-f1aad44d5144", 00:08:50.101 "strip_size_kb": 64, 00:08:50.101 "state": "online", 00:08:50.101 "raid_level": "raid0", 00:08:50.101 "superblock": false, 00:08:50.101 "num_base_bdevs": 3, 00:08:50.101 "num_base_bdevs_discovered": 3, 00:08:50.101 "num_base_bdevs_operational": 3, 00:08:50.101 "base_bdevs_list": [ 00:08:50.101 { 00:08:50.101 "name": "NewBaseBdev", 00:08:50.101 "uuid": "23399e5e-2da6-4451-b44e-378df4dc2085", 00:08:50.101 "is_configured": true, 00:08:50.101 "data_offset": 0, 00:08:50.101 "data_size": 65536 00:08:50.101 }, 00:08:50.101 { 00:08:50.101 "name": "BaseBdev2", 00:08:50.101 "uuid": "832859cf-6a19-4636-a01e-fbe0cb9eba5d", 00:08:50.101 "is_configured": true, 00:08:50.101 "data_offset": 0, 00:08:50.101 "data_size": 65536 00:08:50.101 }, 00:08:50.101 { 00:08:50.101 "name": "BaseBdev3", 00:08:50.101 "uuid": "ba29acf8-7fb2-4ff1-a8d7-a84595ef5375", 00:08:50.101 "is_configured": true, 00:08:50.101 "data_offset": 0, 00:08:50.101 "data_size": 65536 00:08:50.101 } 00:08:50.101 ] 00:08:50.101 } 00:08:50.101 } 00:08:50.101 }' 00:08:50.101 16:05:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:50.101 16:05:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:50.101 BaseBdev2 00:08:50.101 BaseBdev3' 00:08:50.101 16:05:16 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:50.101 16:05:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:50.101 16:05:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:50.101 16:05:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:50.101 16:05:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.101 16:05:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.101 16:05:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:50.101 16:05:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.101 16:05:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:50.101 16:05:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:50.101 16:05:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:50.101 16:05:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:50.101 16:05:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:50.101 16:05:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.101 16:05:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.101 16:05:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.101 16:05:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:50.101 
16:05:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:50.101 16:05:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:50.101 16:05:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:50.101 16:05:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.101 16:05:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.101 16:05:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:50.101 16:05:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.101 16:05:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:50.101 16:05:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:50.101 16:05:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:50.101 16:05:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.101 16:05:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.361 [2024-12-12 16:05:16.451593] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:50.361 [2024-12-12 16:05:16.451650] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:50.361 [2024-12-12 16:05:16.451771] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:50.361 [2024-12-12 16:05:16.451852] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:50.361 [2024-12-12 16:05:16.451867] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:08:50.361 16:05:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.361 16:05:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 65837 00:08:50.361 16:05:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 65837 ']' 00:08:50.361 16:05:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 65837 00:08:50.361 16:05:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:08:50.361 16:05:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:50.361 16:05:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65837 00:08:50.361 16:05:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:50.361 16:05:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:50.361 16:05:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65837' 00:08:50.361 killing process with pid 65837 00:08:50.361 16:05:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 65837 00:08:50.361 [2024-12-12 16:05:16.492646] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:50.361 16:05:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 65837 00:08:50.620 [2024-12-12 16:05:16.845021] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:51.999 ************************************ 00:08:51.999 END TEST raid_state_function_test 00:08:51.999 ************************************ 00:08:51.999 16:05:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:51.999 00:08:51.999 real 0m10.853s 00:08:51.999 user 0m16.974s 
00:08:51.999 sys 0m1.847s 00:08:51.999 16:05:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:51.999 16:05:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.999 16:05:18 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:08:51.999 16:05:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:51.999 16:05:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:51.999 16:05:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:51.999 ************************************ 00:08:51.999 START TEST raid_state_function_test_sb 00:08:51.999 ************************************ 00:08:51.999 16:05:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 true 00:08:51.999 16:05:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:51.999 16:05:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:51.999 16:05:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:51.999 16:05:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:51.999 16:05:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:51.999 16:05:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:51.999 16:05:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:51.999 16:05:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:51.999 16:05:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:51.999 16:05:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo 
BaseBdev2 00:08:51.999 16:05:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:51.999 16:05:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:51.999 16:05:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:51.999 16:05:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:51.999 16:05:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:51.999 16:05:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:51.999 16:05:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:51.999 16:05:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:51.999 16:05:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:51.999 16:05:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:51.999 16:05:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:51.999 16:05:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:51.999 16:05:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:51.999 16:05:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:51.999 16:05:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:51.999 16:05:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:51.999 16:05:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 
00:08:51.999 16:05:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=66464 00:08:51.999 16:05:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 66464' 00:08:51.999 Process raid pid: 66464 00:08:51.999 16:05:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 66464 00:08:51.999 16:05:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 66464 ']' 00:08:51.999 16:05:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:51.999 16:05:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:51.999 16:05:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:51.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:51.999 16:05:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:51.999 16:05:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.999 [2024-12-12 16:05:18.270861] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:08:51.999 [2024-12-12 16:05:18.271120] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:52.258 [2024-12-12 16:05:18.478759] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.518 [2024-12-12 16:05:18.623897] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.777 [2024-12-12 16:05:18.877067] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:52.777 [2024-12-12 16:05:18.877216] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:52.777 16:05:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:52.777 16:05:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:08:52.777 16:05:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:52.777 16:05:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.777 16:05:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.777 [2024-12-12 16:05:19.120427] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:52.777 [2024-12-12 16:05:19.120499] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:52.777 [2024-12-12 16:05:19.120510] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:52.777 [2024-12-12 16:05:19.120521] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:52.777 [2024-12-12 16:05:19.120528] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:08:52.777 [2024-12-12 16:05:19.120537] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:52.777 16:05:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.777 16:05:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:52.777 16:05:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:52.777 16:05:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:52.777 16:05:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:52.777 16:05:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:52.778 16:05:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:52.778 16:05:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:52.778 16:05:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:52.778 16:05:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:52.778 16:05:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:53.037 16:05:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.037 16:05:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:53.037 16:05:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.037 16:05:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.037 16:05:19 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.037 16:05:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:53.037 "name": "Existed_Raid", 00:08:53.037 "uuid": "1c1ec879-2ec8-4d35-8944-3bff9229a950", 00:08:53.037 "strip_size_kb": 64, 00:08:53.037 "state": "configuring", 00:08:53.037 "raid_level": "raid0", 00:08:53.037 "superblock": true, 00:08:53.037 "num_base_bdevs": 3, 00:08:53.037 "num_base_bdevs_discovered": 0, 00:08:53.037 "num_base_bdevs_operational": 3, 00:08:53.037 "base_bdevs_list": [ 00:08:53.037 { 00:08:53.037 "name": "BaseBdev1", 00:08:53.037 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:53.037 "is_configured": false, 00:08:53.037 "data_offset": 0, 00:08:53.037 "data_size": 0 00:08:53.037 }, 00:08:53.037 { 00:08:53.037 "name": "BaseBdev2", 00:08:53.037 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:53.037 "is_configured": false, 00:08:53.037 "data_offset": 0, 00:08:53.037 "data_size": 0 00:08:53.037 }, 00:08:53.037 { 00:08:53.037 "name": "BaseBdev3", 00:08:53.037 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:53.037 "is_configured": false, 00:08:53.037 "data_offset": 0, 00:08:53.037 "data_size": 0 00:08:53.037 } 00:08:53.037 ] 00:08:53.037 }' 00:08:53.037 16:05:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:53.037 16:05:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.297 16:05:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:53.297 16:05:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.297 16:05:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.297 [2024-12-12 16:05:19.559690] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:53.297 [2024-12-12 16:05:19.559838] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:53.297 16:05:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.297 16:05:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:53.297 16:05:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.297 16:05:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.297 [2024-12-12 16:05:19.571640] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:53.297 [2024-12-12 16:05:19.571729] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:53.297 [2024-12-12 16:05:19.571756] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:53.297 [2024-12-12 16:05:19.571779] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:53.297 [2024-12-12 16:05:19.571796] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:53.297 [2024-12-12 16:05:19.571817] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:53.297 16:05:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.297 16:05:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:53.297 16:05:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.297 16:05:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.297 [2024-12-12 16:05:19.626947] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:53.297 BaseBdev1 
00:08:53.297 16:05:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.297 16:05:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:53.297 16:05:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:53.297 16:05:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:53.297 16:05:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:53.297 16:05:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:53.297 16:05:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:53.297 16:05:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:53.297 16:05:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.297 16:05:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.297 16:05:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.297 16:05:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:53.297 16:05:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.297 16:05:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.561 [ 00:08:53.561 { 00:08:53.561 "name": "BaseBdev1", 00:08:53.561 "aliases": [ 00:08:53.561 "bf5fda96-46ed-447c-93d0-77ee0f081e92" 00:08:53.561 ], 00:08:53.561 "product_name": "Malloc disk", 00:08:53.561 "block_size": 512, 00:08:53.561 "num_blocks": 65536, 00:08:53.561 "uuid": "bf5fda96-46ed-447c-93d0-77ee0f081e92", 00:08:53.561 "assigned_rate_limits": { 00:08:53.561 
"rw_ios_per_sec": 0, 00:08:53.561 "rw_mbytes_per_sec": 0, 00:08:53.561 "r_mbytes_per_sec": 0, 00:08:53.561 "w_mbytes_per_sec": 0 00:08:53.561 }, 00:08:53.561 "claimed": true, 00:08:53.561 "claim_type": "exclusive_write", 00:08:53.561 "zoned": false, 00:08:53.561 "supported_io_types": { 00:08:53.561 "read": true, 00:08:53.561 "write": true, 00:08:53.561 "unmap": true, 00:08:53.561 "flush": true, 00:08:53.561 "reset": true, 00:08:53.561 "nvme_admin": false, 00:08:53.561 "nvme_io": false, 00:08:53.561 "nvme_io_md": false, 00:08:53.561 "write_zeroes": true, 00:08:53.561 "zcopy": true, 00:08:53.561 "get_zone_info": false, 00:08:53.561 "zone_management": false, 00:08:53.561 "zone_append": false, 00:08:53.561 "compare": false, 00:08:53.561 "compare_and_write": false, 00:08:53.561 "abort": true, 00:08:53.561 "seek_hole": false, 00:08:53.561 "seek_data": false, 00:08:53.561 "copy": true, 00:08:53.561 "nvme_iov_md": false 00:08:53.561 }, 00:08:53.561 "memory_domains": [ 00:08:53.561 { 00:08:53.561 "dma_device_id": "system", 00:08:53.561 "dma_device_type": 1 00:08:53.561 }, 00:08:53.561 { 00:08:53.561 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:53.561 "dma_device_type": 2 00:08:53.561 } 00:08:53.561 ], 00:08:53.561 "driver_specific": {} 00:08:53.561 } 00:08:53.561 ] 00:08:53.561 16:05:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.561 16:05:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:53.561 16:05:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:53.561 16:05:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:53.561 16:05:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:53.561 16:05:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:08:53.561 16:05:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:53.561 16:05:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:53.561 16:05:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:53.561 16:05:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:53.561 16:05:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:53.561 16:05:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:53.561 16:05:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:53.561 16:05:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.561 16:05:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.561 16:05:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.561 16:05:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.561 16:05:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:53.561 "name": "Existed_Raid", 00:08:53.561 "uuid": "1f589865-5dae-409d-98e4-951ed5f71b89", 00:08:53.561 "strip_size_kb": 64, 00:08:53.561 "state": "configuring", 00:08:53.561 "raid_level": "raid0", 00:08:53.561 "superblock": true, 00:08:53.561 "num_base_bdevs": 3, 00:08:53.562 "num_base_bdevs_discovered": 1, 00:08:53.562 "num_base_bdevs_operational": 3, 00:08:53.562 "base_bdevs_list": [ 00:08:53.562 { 00:08:53.562 "name": "BaseBdev1", 00:08:53.562 "uuid": "bf5fda96-46ed-447c-93d0-77ee0f081e92", 00:08:53.562 "is_configured": true, 00:08:53.562 "data_offset": 2048, 00:08:53.562 "data_size": 63488 
00:08:53.562 }, 00:08:53.562 { 00:08:53.562 "name": "BaseBdev2", 00:08:53.562 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:53.562 "is_configured": false, 00:08:53.562 "data_offset": 0, 00:08:53.562 "data_size": 0 00:08:53.562 }, 00:08:53.562 { 00:08:53.562 "name": "BaseBdev3", 00:08:53.562 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:53.562 "is_configured": false, 00:08:53.562 "data_offset": 0, 00:08:53.562 "data_size": 0 00:08:53.562 } 00:08:53.562 ] 00:08:53.562 }' 00:08:53.562 16:05:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:53.562 16:05:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.825 16:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:53.825 16:05:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.825 16:05:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.825 [2024-12-12 16:05:20.110199] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:53.825 [2024-12-12 16:05:20.110377] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:53.825 16:05:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.825 16:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:53.825 16:05:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.825 16:05:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.825 [2024-12-12 16:05:20.122197] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:53.825 [2024-12-12 
16:05:20.124411] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:53.825 [2024-12-12 16:05:20.124462] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:53.825 [2024-12-12 16:05:20.124474] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:53.825 [2024-12-12 16:05:20.124485] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:53.825 16:05:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.825 16:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:53.825 16:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:53.825 16:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:53.825 16:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:53.825 16:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:53.825 16:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:53.825 16:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:53.825 16:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:53.825 16:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:53.825 16:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:53.825 16:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:53.825 16:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:08:53.825 16:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.825 16:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:53.825 16:05:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.825 16:05:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.825 16:05:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.084 16:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:54.084 "name": "Existed_Raid", 00:08:54.084 "uuid": "5efe908e-7ed8-49ec-b0e9-fa07d505ba89", 00:08:54.084 "strip_size_kb": 64, 00:08:54.084 "state": "configuring", 00:08:54.084 "raid_level": "raid0", 00:08:54.084 "superblock": true, 00:08:54.084 "num_base_bdevs": 3, 00:08:54.084 "num_base_bdevs_discovered": 1, 00:08:54.084 "num_base_bdevs_operational": 3, 00:08:54.084 "base_bdevs_list": [ 00:08:54.084 { 00:08:54.084 "name": "BaseBdev1", 00:08:54.084 "uuid": "bf5fda96-46ed-447c-93d0-77ee0f081e92", 00:08:54.084 "is_configured": true, 00:08:54.084 "data_offset": 2048, 00:08:54.084 "data_size": 63488 00:08:54.084 }, 00:08:54.084 { 00:08:54.084 "name": "BaseBdev2", 00:08:54.084 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:54.084 "is_configured": false, 00:08:54.084 "data_offset": 0, 00:08:54.084 "data_size": 0 00:08:54.084 }, 00:08:54.084 { 00:08:54.084 "name": "BaseBdev3", 00:08:54.084 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:54.084 "is_configured": false, 00:08:54.084 "data_offset": 0, 00:08:54.084 "data_size": 0 00:08:54.084 } 00:08:54.084 ] 00:08:54.084 }' 00:08:54.084 16:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:54.084 16:05:20 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:54.344 16:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:54.344 16:05:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.344 16:05:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.344 [2024-12-12 16:05:20.541141] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:54.344 BaseBdev2 00:08:54.344 16:05:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.344 16:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:54.344 16:05:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:54.344 16:05:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:54.344 16:05:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:54.344 16:05:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:54.344 16:05:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:54.344 16:05:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:54.344 16:05:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.344 16:05:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.344 16:05:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.344 16:05:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:54.344 16:05:20 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.344 16:05:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.344 [ 00:08:54.344 { 00:08:54.344 "name": "BaseBdev2", 00:08:54.344 "aliases": [ 00:08:54.344 "afe31cd3-5cab-4b00-987d-e6725fd49762" 00:08:54.344 ], 00:08:54.344 "product_name": "Malloc disk", 00:08:54.344 "block_size": 512, 00:08:54.344 "num_blocks": 65536, 00:08:54.344 "uuid": "afe31cd3-5cab-4b00-987d-e6725fd49762", 00:08:54.344 "assigned_rate_limits": { 00:08:54.344 "rw_ios_per_sec": 0, 00:08:54.344 "rw_mbytes_per_sec": 0, 00:08:54.344 "r_mbytes_per_sec": 0, 00:08:54.344 "w_mbytes_per_sec": 0 00:08:54.344 }, 00:08:54.344 "claimed": true, 00:08:54.344 "claim_type": "exclusive_write", 00:08:54.344 "zoned": false, 00:08:54.344 "supported_io_types": { 00:08:54.344 "read": true, 00:08:54.344 "write": true, 00:08:54.344 "unmap": true, 00:08:54.344 "flush": true, 00:08:54.344 "reset": true, 00:08:54.344 "nvme_admin": false, 00:08:54.344 "nvme_io": false, 00:08:54.344 "nvme_io_md": false, 00:08:54.344 "write_zeroes": true, 00:08:54.344 "zcopy": true, 00:08:54.344 "get_zone_info": false, 00:08:54.344 "zone_management": false, 00:08:54.344 "zone_append": false, 00:08:54.344 "compare": false, 00:08:54.344 "compare_and_write": false, 00:08:54.344 "abort": true, 00:08:54.344 "seek_hole": false, 00:08:54.344 "seek_data": false, 00:08:54.344 "copy": true, 00:08:54.344 "nvme_iov_md": false 00:08:54.344 }, 00:08:54.344 "memory_domains": [ 00:08:54.344 { 00:08:54.344 "dma_device_id": "system", 00:08:54.344 "dma_device_type": 1 00:08:54.344 }, 00:08:54.344 { 00:08:54.344 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:54.344 "dma_device_type": 2 00:08:54.344 } 00:08:54.344 ], 00:08:54.344 "driver_specific": {} 00:08:54.344 } 00:08:54.344 ] 00:08:54.344 16:05:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.344 16:05:20 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@911 -- # return 0 00:08:54.344 16:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:54.344 16:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:54.344 16:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:54.344 16:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:54.344 16:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:54.344 16:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:54.344 16:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:54.344 16:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:54.344 16:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:54.344 16:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:54.344 16:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:54.344 16:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:54.344 16:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.344 16:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:54.344 16:05:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.344 16:05:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.344 16:05:20 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.344 16:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:54.344 "name": "Existed_Raid", 00:08:54.344 "uuid": "5efe908e-7ed8-49ec-b0e9-fa07d505ba89", 00:08:54.344 "strip_size_kb": 64, 00:08:54.344 "state": "configuring", 00:08:54.344 "raid_level": "raid0", 00:08:54.344 "superblock": true, 00:08:54.344 "num_base_bdevs": 3, 00:08:54.344 "num_base_bdevs_discovered": 2, 00:08:54.344 "num_base_bdevs_operational": 3, 00:08:54.344 "base_bdevs_list": [ 00:08:54.344 { 00:08:54.344 "name": "BaseBdev1", 00:08:54.344 "uuid": "bf5fda96-46ed-447c-93d0-77ee0f081e92", 00:08:54.344 "is_configured": true, 00:08:54.344 "data_offset": 2048, 00:08:54.344 "data_size": 63488 00:08:54.344 }, 00:08:54.344 { 00:08:54.344 "name": "BaseBdev2", 00:08:54.344 "uuid": "afe31cd3-5cab-4b00-987d-e6725fd49762", 00:08:54.344 "is_configured": true, 00:08:54.344 "data_offset": 2048, 00:08:54.344 "data_size": 63488 00:08:54.344 }, 00:08:54.344 { 00:08:54.344 "name": "BaseBdev3", 00:08:54.344 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:54.344 "is_configured": false, 00:08:54.344 "data_offset": 0, 00:08:54.344 "data_size": 0 00:08:54.344 } 00:08:54.344 ] 00:08:54.344 }' 00:08:54.344 16:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:54.344 16:05:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.914 16:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:54.914 16:05:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.914 16:05:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.914 [2024-12-12 16:05:21.081401] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:54.914 [2024-12-12 16:05:21.081715] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:54.914 [2024-12-12 16:05:21.081740] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:54.914 [2024-12-12 16:05:21.082081] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:54.914 BaseBdev3 00:08:54.914 [2024-12-12 16:05:21.082268] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:54.914 [2024-12-12 16:05:21.082287] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:54.914 [2024-12-12 16:05:21.082465] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:54.914 16:05:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.914 16:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:54.915 16:05:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:54.915 16:05:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:54.915 16:05:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:54.915 16:05:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:54.915 16:05:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:54.915 16:05:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:54.915 16:05:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.915 16:05:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.915 16:05:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:08:54.915 16:05:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:54.915 16:05:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.915 16:05:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.915 [ 00:08:54.915 { 00:08:54.915 "name": "BaseBdev3", 00:08:54.915 "aliases": [ 00:08:54.915 "2ae5954e-94ab-498c-8155-f04fbd4259ba" 00:08:54.915 ], 00:08:54.915 "product_name": "Malloc disk", 00:08:54.915 "block_size": 512, 00:08:54.915 "num_blocks": 65536, 00:08:54.915 "uuid": "2ae5954e-94ab-498c-8155-f04fbd4259ba", 00:08:54.915 "assigned_rate_limits": { 00:08:54.915 "rw_ios_per_sec": 0, 00:08:54.915 "rw_mbytes_per_sec": 0, 00:08:54.915 "r_mbytes_per_sec": 0, 00:08:54.915 "w_mbytes_per_sec": 0 00:08:54.915 }, 00:08:54.915 "claimed": true, 00:08:54.915 "claim_type": "exclusive_write", 00:08:54.915 "zoned": false, 00:08:54.915 "supported_io_types": { 00:08:54.915 "read": true, 00:08:54.915 "write": true, 00:08:54.915 "unmap": true, 00:08:54.915 "flush": true, 00:08:54.915 "reset": true, 00:08:54.915 "nvme_admin": false, 00:08:54.915 "nvme_io": false, 00:08:54.915 "nvme_io_md": false, 00:08:54.915 "write_zeroes": true, 00:08:54.915 "zcopy": true, 00:08:54.915 "get_zone_info": false, 00:08:54.915 "zone_management": false, 00:08:54.915 "zone_append": false, 00:08:54.915 "compare": false, 00:08:54.915 "compare_and_write": false, 00:08:54.915 "abort": true, 00:08:54.915 "seek_hole": false, 00:08:54.915 "seek_data": false, 00:08:54.915 "copy": true, 00:08:54.915 "nvme_iov_md": false 00:08:54.915 }, 00:08:54.915 "memory_domains": [ 00:08:54.915 { 00:08:54.915 "dma_device_id": "system", 00:08:54.915 "dma_device_type": 1 00:08:54.915 }, 00:08:54.915 { 00:08:54.915 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:54.915 "dma_device_type": 2 00:08:54.915 } 00:08:54.915 ], 00:08:54.915 "driver_specific": 
{} 00:08:54.915 } 00:08:54.915 ] 00:08:54.915 16:05:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.915 16:05:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:54.915 16:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:54.915 16:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:54.915 16:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:54.915 16:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:54.915 16:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:54.915 16:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:54.915 16:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:54.915 16:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:54.915 16:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:54.915 16:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:54.915 16:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:54.915 16:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:54.915 16:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.915 16:05:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.915 16:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:08:54.915 16:05:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.915 16:05:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.915 16:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:54.915 "name": "Existed_Raid", 00:08:54.915 "uuid": "5efe908e-7ed8-49ec-b0e9-fa07d505ba89", 00:08:54.915 "strip_size_kb": 64, 00:08:54.915 "state": "online", 00:08:54.915 "raid_level": "raid0", 00:08:54.915 "superblock": true, 00:08:54.915 "num_base_bdevs": 3, 00:08:54.915 "num_base_bdevs_discovered": 3, 00:08:54.915 "num_base_bdevs_operational": 3, 00:08:54.915 "base_bdevs_list": [ 00:08:54.915 { 00:08:54.915 "name": "BaseBdev1", 00:08:54.915 "uuid": "bf5fda96-46ed-447c-93d0-77ee0f081e92", 00:08:54.915 "is_configured": true, 00:08:54.915 "data_offset": 2048, 00:08:54.915 "data_size": 63488 00:08:54.915 }, 00:08:54.915 { 00:08:54.915 "name": "BaseBdev2", 00:08:54.915 "uuid": "afe31cd3-5cab-4b00-987d-e6725fd49762", 00:08:54.915 "is_configured": true, 00:08:54.915 "data_offset": 2048, 00:08:54.915 "data_size": 63488 00:08:54.915 }, 00:08:54.915 { 00:08:54.915 "name": "BaseBdev3", 00:08:54.915 "uuid": "2ae5954e-94ab-498c-8155-f04fbd4259ba", 00:08:54.915 "is_configured": true, 00:08:54.915 "data_offset": 2048, 00:08:54.915 "data_size": 63488 00:08:54.915 } 00:08:54.915 ] 00:08:54.915 }' 00:08:54.915 16:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:54.915 16:05:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.485 16:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:55.485 16:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:55.485 16:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # 
local raid_bdev_info 00:08:55.485 16:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:55.485 16:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:55.485 16:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:55.485 16:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:55.485 16:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:55.485 16:05:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.485 16:05:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.485 [2024-12-12 16:05:21.561081] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:55.485 16:05:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.485 16:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:55.485 "name": "Existed_Raid", 00:08:55.485 "aliases": [ 00:08:55.485 "5efe908e-7ed8-49ec-b0e9-fa07d505ba89" 00:08:55.485 ], 00:08:55.485 "product_name": "Raid Volume", 00:08:55.485 "block_size": 512, 00:08:55.485 "num_blocks": 190464, 00:08:55.485 "uuid": "5efe908e-7ed8-49ec-b0e9-fa07d505ba89", 00:08:55.485 "assigned_rate_limits": { 00:08:55.485 "rw_ios_per_sec": 0, 00:08:55.485 "rw_mbytes_per_sec": 0, 00:08:55.485 "r_mbytes_per_sec": 0, 00:08:55.485 "w_mbytes_per_sec": 0 00:08:55.485 }, 00:08:55.485 "claimed": false, 00:08:55.485 "zoned": false, 00:08:55.485 "supported_io_types": { 00:08:55.485 "read": true, 00:08:55.485 "write": true, 00:08:55.485 "unmap": true, 00:08:55.485 "flush": true, 00:08:55.485 "reset": true, 00:08:55.485 "nvme_admin": false, 00:08:55.485 "nvme_io": false, 00:08:55.485 "nvme_io_md": false, 00:08:55.485 
"write_zeroes": true, 00:08:55.485 "zcopy": false, 00:08:55.485 "get_zone_info": false, 00:08:55.485 "zone_management": false, 00:08:55.485 "zone_append": false, 00:08:55.485 "compare": false, 00:08:55.485 "compare_and_write": false, 00:08:55.485 "abort": false, 00:08:55.485 "seek_hole": false, 00:08:55.485 "seek_data": false, 00:08:55.485 "copy": false, 00:08:55.485 "nvme_iov_md": false 00:08:55.485 }, 00:08:55.485 "memory_domains": [ 00:08:55.485 { 00:08:55.485 "dma_device_id": "system", 00:08:55.485 "dma_device_type": 1 00:08:55.485 }, 00:08:55.485 { 00:08:55.485 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:55.485 "dma_device_type": 2 00:08:55.485 }, 00:08:55.485 { 00:08:55.485 "dma_device_id": "system", 00:08:55.485 "dma_device_type": 1 00:08:55.485 }, 00:08:55.485 { 00:08:55.485 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:55.485 "dma_device_type": 2 00:08:55.485 }, 00:08:55.485 { 00:08:55.485 "dma_device_id": "system", 00:08:55.485 "dma_device_type": 1 00:08:55.485 }, 00:08:55.485 { 00:08:55.485 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:55.485 "dma_device_type": 2 00:08:55.485 } 00:08:55.485 ], 00:08:55.485 "driver_specific": { 00:08:55.485 "raid": { 00:08:55.485 "uuid": "5efe908e-7ed8-49ec-b0e9-fa07d505ba89", 00:08:55.485 "strip_size_kb": 64, 00:08:55.485 "state": "online", 00:08:55.485 "raid_level": "raid0", 00:08:55.485 "superblock": true, 00:08:55.485 "num_base_bdevs": 3, 00:08:55.485 "num_base_bdevs_discovered": 3, 00:08:55.485 "num_base_bdevs_operational": 3, 00:08:55.485 "base_bdevs_list": [ 00:08:55.485 { 00:08:55.485 "name": "BaseBdev1", 00:08:55.485 "uuid": "bf5fda96-46ed-447c-93d0-77ee0f081e92", 00:08:55.485 "is_configured": true, 00:08:55.485 "data_offset": 2048, 00:08:55.485 "data_size": 63488 00:08:55.485 }, 00:08:55.485 { 00:08:55.485 "name": "BaseBdev2", 00:08:55.485 "uuid": "afe31cd3-5cab-4b00-987d-e6725fd49762", 00:08:55.485 "is_configured": true, 00:08:55.486 "data_offset": 2048, 00:08:55.486 "data_size": 63488 00:08:55.486 }, 
00:08:55.486 { 00:08:55.486 "name": "BaseBdev3", 00:08:55.486 "uuid": "2ae5954e-94ab-498c-8155-f04fbd4259ba", 00:08:55.486 "is_configured": true, 00:08:55.486 "data_offset": 2048, 00:08:55.486 "data_size": 63488 00:08:55.486 } 00:08:55.486 ] 00:08:55.486 } 00:08:55.486 } 00:08:55.486 }' 00:08:55.486 16:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:55.486 16:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:55.486 BaseBdev2 00:08:55.486 BaseBdev3' 00:08:55.486 16:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:55.486 16:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:55.486 16:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:55.486 16:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:55.486 16:05:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.486 16:05:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.486 16:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:55.486 16:05:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.486 16:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:55.486 16:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:55.486 16:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:55.486 
16:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:55.486 16:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:55.486 16:05:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.486 16:05:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.486 16:05:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.486 16:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:55.486 16:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:55.486 16:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:55.486 16:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:55.486 16:05:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.486 16:05:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.486 16:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:55.486 16:05:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.486 16:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:55.486 16:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:55.486 16:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:55.486 16:05:21 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.486 16:05:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.486 [2024-12-12 16:05:21.828291] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:55.486 [2024-12-12 16:05:21.828337] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:55.486 [2024-12-12 16:05:21.828402] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:55.746 16:05:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.746 16:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:55.746 16:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:55.746 16:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:55.746 16:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:55.746 16:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:55.746 16:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:08:55.746 16:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:55.746 16:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:55.746 16:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:55.746 16:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:55.746 16:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:55.746 16:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:08:55.746 16:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:55.746 16:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:55.746 16:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:55.746 16:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:55.746 16:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.746 16:05:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.746 16:05:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.746 16:05:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.746 16:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:55.746 "name": "Existed_Raid", 00:08:55.746 "uuid": "5efe908e-7ed8-49ec-b0e9-fa07d505ba89", 00:08:55.746 "strip_size_kb": 64, 00:08:55.746 "state": "offline", 00:08:55.746 "raid_level": "raid0", 00:08:55.746 "superblock": true, 00:08:55.746 "num_base_bdevs": 3, 00:08:55.746 "num_base_bdevs_discovered": 2, 00:08:55.746 "num_base_bdevs_operational": 2, 00:08:55.746 "base_bdevs_list": [ 00:08:55.746 { 00:08:55.746 "name": null, 00:08:55.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.746 "is_configured": false, 00:08:55.746 "data_offset": 0, 00:08:55.746 "data_size": 63488 00:08:55.746 }, 00:08:55.746 { 00:08:55.746 "name": "BaseBdev2", 00:08:55.746 "uuid": "afe31cd3-5cab-4b00-987d-e6725fd49762", 00:08:55.746 "is_configured": true, 00:08:55.746 "data_offset": 2048, 00:08:55.746 "data_size": 63488 00:08:55.746 }, 00:08:55.746 { 00:08:55.746 "name": "BaseBdev3", 00:08:55.746 "uuid": "2ae5954e-94ab-498c-8155-f04fbd4259ba", 
00:08:55.746 "is_configured": true, 00:08:55.746 "data_offset": 2048, 00:08:55.746 "data_size": 63488 00:08:55.746 } 00:08:55.746 ] 00:08:55.746 }' 00:08:55.746 16:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:55.746 16:05:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.316 16:05:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:56.317 16:05:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:56.317 16:05:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:56.317 16:05:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.317 16:05:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.317 16:05:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.317 16:05:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.317 16:05:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:56.317 16:05:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:56.317 16:05:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:56.317 16:05:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.317 16:05:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.317 [2024-12-12 16:05:22.415500] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:56.317 16:05:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.317 16:05:22 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:56.317 16:05:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:56.317 16:05:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.317 16:05:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.317 16:05:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.317 16:05:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:56.317 16:05:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.317 16:05:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:56.317 16:05:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:56.317 16:05:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:56.317 16:05:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.317 16:05:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.317 [2024-12-12 16:05:22.581045] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:56.317 [2024-12-12 16:05:22.581114] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:56.577 16:05:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.577 16:05:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:56.577 16:05:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:56.577 16:05:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] 
| select(.)' 00:08:56.577 16:05:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.577 16:05:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.577 16:05:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.577 16:05:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.577 16:05:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:56.577 16:05:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:56.577 16:05:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:56.577 16:05:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:56.577 16:05:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:56.577 16:05:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:56.577 16:05:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.577 16:05:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.577 BaseBdev2 00:08:56.577 16:05:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.577 16:05:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:56.577 16:05:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:56.577 16:05:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:56.577 16:05:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:56.577 16:05:22 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:56.577 16:05:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:56.577 16:05:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:56.577 16:05:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.577 16:05:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.577 16:05:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.577 16:05:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:56.577 16:05:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.577 16:05:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.577 [ 00:08:56.577 { 00:08:56.577 "name": "BaseBdev2", 00:08:56.577 "aliases": [ 00:08:56.577 "06a5a6e9-3adf-464b-98c0-b81b00195b39" 00:08:56.577 ], 00:08:56.577 "product_name": "Malloc disk", 00:08:56.577 "block_size": 512, 00:08:56.577 "num_blocks": 65536, 00:08:56.577 "uuid": "06a5a6e9-3adf-464b-98c0-b81b00195b39", 00:08:56.577 "assigned_rate_limits": { 00:08:56.577 "rw_ios_per_sec": 0, 00:08:56.577 "rw_mbytes_per_sec": 0, 00:08:56.577 "r_mbytes_per_sec": 0, 00:08:56.577 "w_mbytes_per_sec": 0 00:08:56.577 }, 00:08:56.577 "claimed": false, 00:08:56.577 "zoned": false, 00:08:56.577 "supported_io_types": { 00:08:56.577 "read": true, 00:08:56.577 "write": true, 00:08:56.577 "unmap": true, 00:08:56.577 "flush": true, 00:08:56.577 "reset": true, 00:08:56.577 "nvme_admin": false, 00:08:56.577 "nvme_io": false, 00:08:56.577 "nvme_io_md": false, 00:08:56.577 "write_zeroes": true, 00:08:56.577 "zcopy": true, 00:08:56.577 "get_zone_info": false, 00:08:56.577 "zone_management": false, 00:08:56.577 
"zone_append": false, 00:08:56.577 "compare": false, 00:08:56.577 "compare_and_write": false, 00:08:56.577 "abort": true, 00:08:56.577 "seek_hole": false, 00:08:56.577 "seek_data": false, 00:08:56.577 "copy": true, 00:08:56.577 "nvme_iov_md": false 00:08:56.577 }, 00:08:56.577 "memory_domains": [ 00:08:56.577 { 00:08:56.577 "dma_device_id": "system", 00:08:56.577 "dma_device_type": 1 00:08:56.577 }, 00:08:56.577 { 00:08:56.577 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:56.577 "dma_device_type": 2 00:08:56.577 } 00:08:56.577 ], 00:08:56.578 "driver_specific": {} 00:08:56.578 } 00:08:56.578 ] 00:08:56.578 16:05:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.578 16:05:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:56.578 16:05:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:56.578 16:05:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:56.578 16:05:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:56.578 16:05:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.578 16:05:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.578 BaseBdev3 00:08:56.578 16:05:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.578 16:05:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:56.578 16:05:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:56.578 16:05:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:56.578 16:05:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:56.578 
16:05:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:56.578 16:05:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:56.578 16:05:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:56.578 16:05:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.578 16:05:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.578 16:05:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.578 16:05:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:56.578 16:05:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.578 16:05:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.578 [ 00:08:56.578 { 00:08:56.578 "name": "BaseBdev3", 00:08:56.578 "aliases": [ 00:08:56.578 "58acd982-d463-4efd-ae75-6b936bfa3ffe" 00:08:56.578 ], 00:08:56.578 "product_name": "Malloc disk", 00:08:56.578 "block_size": 512, 00:08:56.578 "num_blocks": 65536, 00:08:56.578 "uuid": "58acd982-d463-4efd-ae75-6b936bfa3ffe", 00:08:56.578 "assigned_rate_limits": { 00:08:56.578 "rw_ios_per_sec": 0, 00:08:56.578 "rw_mbytes_per_sec": 0, 00:08:56.578 "r_mbytes_per_sec": 0, 00:08:56.578 "w_mbytes_per_sec": 0 00:08:56.578 }, 00:08:56.578 "claimed": false, 00:08:56.578 "zoned": false, 00:08:56.578 "supported_io_types": { 00:08:56.578 "read": true, 00:08:56.578 "write": true, 00:08:56.578 "unmap": true, 00:08:56.578 "flush": true, 00:08:56.578 "reset": true, 00:08:56.578 "nvme_admin": false, 00:08:56.578 "nvme_io": false, 00:08:56.578 "nvme_io_md": false, 00:08:56.578 "write_zeroes": true, 00:08:56.578 "zcopy": true, 00:08:56.578 "get_zone_info": false, 
00:08:56.578 "zone_management": false, 00:08:56.578 "zone_append": false, 00:08:56.578 "compare": false, 00:08:56.578 "compare_and_write": false, 00:08:56.578 "abort": true, 00:08:56.578 "seek_hole": false, 00:08:56.578 "seek_data": false, 00:08:56.578 "copy": true, 00:08:56.578 "nvme_iov_md": false 00:08:56.578 }, 00:08:56.578 "memory_domains": [ 00:08:56.578 { 00:08:56.578 "dma_device_id": "system", 00:08:56.578 "dma_device_type": 1 00:08:56.578 }, 00:08:56.578 { 00:08:56.578 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:56.578 "dma_device_type": 2 00:08:56.578 } 00:08:56.578 ], 00:08:56.578 "driver_specific": {} 00:08:56.578 } 00:08:56.578 ] 00:08:56.578 16:05:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.578 16:05:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:56.578 16:05:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:56.578 16:05:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:56.578 16:05:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:56.578 16:05:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.838 16:05:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.838 [2024-12-12 16:05:22.932641] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:56.838 [2024-12-12 16:05:22.932800] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:56.838 [2024-12-12 16:05:22.932862] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:56.838 [2024-12-12 16:05:22.935081] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 
is claimed 00:08:56.838 16:05:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.838 16:05:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:56.838 16:05:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:56.838 16:05:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:56.838 16:05:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:56.838 16:05:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:56.838 16:05:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:56.838 16:05:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:56.838 16:05:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:56.838 16:05:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:56.838 16:05:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:56.838 16:05:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:56.838 16:05:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.838 16:05:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.838 16:05:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.838 16:05:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.838 16:05:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:08:56.838 "name": "Existed_Raid", 00:08:56.838 "uuid": "595685e6-f8bd-42a3-95f5-0ba9755bc2cd", 00:08:56.838 "strip_size_kb": 64, 00:08:56.838 "state": "configuring", 00:08:56.838 "raid_level": "raid0", 00:08:56.838 "superblock": true, 00:08:56.838 "num_base_bdevs": 3, 00:08:56.838 "num_base_bdevs_discovered": 2, 00:08:56.838 "num_base_bdevs_operational": 3, 00:08:56.838 "base_bdevs_list": [ 00:08:56.838 { 00:08:56.838 "name": "BaseBdev1", 00:08:56.838 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:56.838 "is_configured": false, 00:08:56.838 "data_offset": 0, 00:08:56.838 "data_size": 0 00:08:56.838 }, 00:08:56.838 { 00:08:56.838 "name": "BaseBdev2", 00:08:56.838 "uuid": "06a5a6e9-3adf-464b-98c0-b81b00195b39", 00:08:56.838 "is_configured": true, 00:08:56.838 "data_offset": 2048, 00:08:56.838 "data_size": 63488 00:08:56.838 }, 00:08:56.838 { 00:08:56.838 "name": "BaseBdev3", 00:08:56.838 "uuid": "58acd982-d463-4efd-ae75-6b936bfa3ffe", 00:08:56.838 "is_configured": true, 00:08:56.838 "data_offset": 2048, 00:08:56.838 "data_size": 63488 00:08:56.838 } 00:08:56.838 ] 00:08:56.838 }' 00:08:56.838 16:05:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:56.838 16:05:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.098 16:05:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:57.098 16:05:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.098 16:05:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.098 [2024-12-12 16:05:23.348010] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:57.098 16:05:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.098 16:05:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:57.098 16:05:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:57.098 16:05:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:57.098 16:05:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:57.098 16:05:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:57.098 16:05:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:57.098 16:05:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:57.098 16:05:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:57.098 16:05:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:57.098 16:05:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:57.098 16:05:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:57.098 16:05:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.098 16:05:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.098 16:05:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.098 16:05:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.098 16:05:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:57.098 "name": "Existed_Raid", 00:08:57.098 "uuid": "595685e6-f8bd-42a3-95f5-0ba9755bc2cd", 00:08:57.098 "strip_size_kb": 64, 00:08:57.098 "state": "configuring", 00:08:57.098 "raid_level": "raid0", 
00:08:57.098 "superblock": true, 00:08:57.099 "num_base_bdevs": 3, 00:08:57.099 "num_base_bdevs_discovered": 1, 00:08:57.099 "num_base_bdevs_operational": 3, 00:08:57.099 "base_bdevs_list": [ 00:08:57.099 { 00:08:57.099 "name": "BaseBdev1", 00:08:57.099 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:57.099 "is_configured": false, 00:08:57.099 "data_offset": 0, 00:08:57.099 "data_size": 0 00:08:57.099 }, 00:08:57.099 { 00:08:57.099 "name": null, 00:08:57.099 "uuid": "06a5a6e9-3adf-464b-98c0-b81b00195b39", 00:08:57.099 "is_configured": false, 00:08:57.099 "data_offset": 0, 00:08:57.099 "data_size": 63488 00:08:57.099 }, 00:08:57.099 { 00:08:57.099 "name": "BaseBdev3", 00:08:57.099 "uuid": "58acd982-d463-4efd-ae75-6b936bfa3ffe", 00:08:57.099 "is_configured": true, 00:08:57.099 "data_offset": 2048, 00:08:57.099 "data_size": 63488 00:08:57.099 } 00:08:57.099 ] 00:08:57.099 }' 00:08:57.099 16:05:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:57.099 16:05:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.668 16:05:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:57.668 16:05:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.668 16:05:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.668 16:05:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.668 16:05:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.668 16:05:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:57.668 16:05:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:57.668 16:05:23 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.668 16:05:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.668 [2024-12-12 16:05:23.840321] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:57.668 BaseBdev1 00:08:57.668 16:05:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.668 16:05:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:57.668 16:05:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:57.668 16:05:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:57.668 16:05:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:57.668 16:05:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:57.668 16:05:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:57.668 16:05:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:57.668 16:05:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.668 16:05:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.668 16:05:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.668 16:05:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:57.668 16:05:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.668 16:05:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.668 [ 00:08:57.668 { 00:08:57.668 "name": "BaseBdev1", 00:08:57.668 
"aliases": [ 00:08:57.668 "1b81fb25-9956-4ed4-9fc6-6d43eb9e3a4a" 00:08:57.668 ], 00:08:57.668 "product_name": "Malloc disk", 00:08:57.668 "block_size": 512, 00:08:57.668 "num_blocks": 65536, 00:08:57.668 "uuid": "1b81fb25-9956-4ed4-9fc6-6d43eb9e3a4a", 00:08:57.668 "assigned_rate_limits": { 00:08:57.668 "rw_ios_per_sec": 0, 00:08:57.668 "rw_mbytes_per_sec": 0, 00:08:57.668 "r_mbytes_per_sec": 0, 00:08:57.668 "w_mbytes_per_sec": 0 00:08:57.668 }, 00:08:57.668 "claimed": true, 00:08:57.668 "claim_type": "exclusive_write", 00:08:57.668 "zoned": false, 00:08:57.668 "supported_io_types": { 00:08:57.668 "read": true, 00:08:57.668 "write": true, 00:08:57.668 "unmap": true, 00:08:57.668 "flush": true, 00:08:57.668 "reset": true, 00:08:57.668 "nvme_admin": false, 00:08:57.668 "nvme_io": false, 00:08:57.668 "nvme_io_md": false, 00:08:57.668 "write_zeroes": true, 00:08:57.668 "zcopy": true, 00:08:57.668 "get_zone_info": false, 00:08:57.668 "zone_management": false, 00:08:57.669 "zone_append": false, 00:08:57.669 "compare": false, 00:08:57.669 "compare_and_write": false, 00:08:57.669 "abort": true, 00:08:57.669 "seek_hole": false, 00:08:57.669 "seek_data": false, 00:08:57.669 "copy": true, 00:08:57.669 "nvme_iov_md": false 00:08:57.669 }, 00:08:57.669 "memory_domains": [ 00:08:57.669 { 00:08:57.669 "dma_device_id": "system", 00:08:57.669 "dma_device_type": 1 00:08:57.669 }, 00:08:57.669 { 00:08:57.669 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:57.669 "dma_device_type": 2 00:08:57.669 } 00:08:57.669 ], 00:08:57.669 "driver_specific": {} 00:08:57.669 } 00:08:57.669 ] 00:08:57.669 16:05:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.669 16:05:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:57.669 16:05:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:57.669 16:05:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:57.669 16:05:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:57.669 16:05:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:57.669 16:05:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:57.669 16:05:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:57.669 16:05:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:57.669 16:05:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:57.669 16:05:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:57.669 16:05:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:57.669 16:05:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:57.669 16:05:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.669 16:05:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.669 16:05:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.669 16:05:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.669 16:05:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:57.669 "name": "Existed_Raid", 00:08:57.669 "uuid": "595685e6-f8bd-42a3-95f5-0ba9755bc2cd", 00:08:57.669 "strip_size_kb": 64, 00:08:57.669 "state": "configuring", 00:08:57.669 "raid_level": "raid0", 00:08:57.669 "superblock": true, 00:08:57.669 "num_base_bdevs": 3, 00:08:57.669 
"num_base_bdevs_discovered": 2, 00:08:57.669 "num_base_bdevs_operational": 3, 00:08:57.669 "base_bdevs_list": [ 00:08:57.669 { 00:08:57.669 "name": "BaseBdev1", 00:08:57.669 "uuid": "1b81fb25-9956-4ed4-9fc6-6d43eb9e3a4a", 00:08:57.669 "is_configured": true, 00:08:57.669 "data_offset": 2048, 00:08:57.669 "data_size": 63488 00:08:57.669 }, 00:08:57.669 { 00:08:57.669 "name": null, 00:08:57.669 "uuid": "06a5a6e9-3adf-464b-98c0-b81b00195b39", 00:08:57.669 "is_configured": false, 00:08:57.669 "data_offset": 0, 00:08:57.669 "data_size": 63488 00:08:57.669 }, 00:08:57.669 { 00:08:57.669 "name": "BaseBdev3", 00:08:57.669 "uuid": "58acd982-d463-4efd-ae75-6b936bfa3ffe", 00:08:57.669 "is_configured": true, 00:08:57.669 "data_offset": 2048, 00:08:57.669 "data_size": 63488 00:08:57.669 } 00:08:57.669 ] 00:08:57.669 }' 00:08:57.669 16:05:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:57.669 16:05:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.928 16:05:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.928 16:05:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.929 16:05:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.929 16:05:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:58.187 16:05:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.187 16:05:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:58.187 16:05:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:58.187 16:05:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.187 16:05:24 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.187 [2024-12-12 16:05:24.323673] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:58.187 16:05:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.187 16:05:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:58.187 16:05:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:58.187 16:05:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:58.187 16:05:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:58.187 16:05:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:58.187 16:05:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:58.187 16:05:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:58.187 16:05:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:58.187 16:05:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:58.187 16:05:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:58.187 16:05:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:58.187 16:05:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.187 16:05:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.187 16:05:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.187 16:05:24 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.187 16:05:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:58.187 "name": "Existed_Raid", 00:08:58.187 "uuid": "595685e6-f8bd-42a3-95f5-0ba9755bc2cd", 00:08:58.187 "strip_size_kb": 64, 00:08:58.187 "state": "configuring", 00:08:58.187 "raid_level": "raid0", 00:08:58.187 "superblock": true, 00:08:58.187 "num_base_bdevs": 3, 00:08:58.187 "num_base_bdevs_discovered": 1, 00:08:58.187 "num_base_bdevs_operational": 3, 00:08:58.187 "base_bdevs_list": [ 00:08:58.187 { 00:08:58.187 "name": "BaseBdev1", 00:08:58.187 "uuid": "1b81fb25-9956-4ed4-9fc6-6d43eb9e3a4a", 00:08:58.187 "is_configured": true, 00:08:58.187 "data_offset": 2048, 00:08:58.187 "data_size": 63488 00:08:58.187 }, 00:08:58.187 { 00:08:58.187 "name": null, 00:08:58.187 "uuid": "06a5a6e9-3adf-464b-98c0-b81b00195b39", 00:08:58.187 "is_configured": false, 00:08:58.187 "data_offset": 0, 00:08:58.187 "data_size": 63488 00:08:58.187 }, 00:08:58.187 { 00:08:58.187 "name": null, 00:08:58.187 "uuid": "58acd982-d463-4efd-ae75-6b936bfa3ffe", 00:08:58.187 "is_configured": false, 00:08:58.187 "data_offset": 0, 00:08:58.187 "data_size": 63488 00:08:58.187 } 00:08:58.187 ] 00:08:58.187 }' 00:08:58.188 16:05:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:58.188 16:05:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.447 16:05:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.447 16:05:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:58.447 16:05:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.447 16:05:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.447 16:05:24 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.447 16:05:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:58.447 16:05:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:58.447 16:05:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.447 16:05:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.447 [2024-12-12 16:05:24.786976] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:58.447 16:05:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.447 16:05:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:58.447 16:05:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:58.447 16:05:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:58.447 16:05:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:58.447 16:05:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:58.447 16:05:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:58.447 16:05:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:58.447 16:05:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:58.447 16:05:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:58.447 16:05:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:08:58.707 16:05:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.707 16:05:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:58.707 16:05:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.707 16:05:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.707 16:05:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.707 16:05:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:58.707 "name": "Existed_Raid", 00:08:58.707 "uuid": "595685e6-f8bd-42a3-95f5-0ba9755bc2cd", 00:08:58.707 "strip_size_kb": 64, 00:08:58.707 "state": "configuring", 00:08:58.707 "raid_level": "raid0", 00:08:58.707 "superblock": true, 00:08:58.707 "num_base_bdevs": 3, 00:08:58.707 "num_base_bdevs_discovered": 2, 00:08:58.707 "num_base_bdevs_operational": 3, 00:08:58.707 "base_bdevs_list": [ 00:08:58.707 { 00:08:58.707 "name": "BaseBdev1", 00:08:58.707 "uuid": "1b81fb25-9956-4ed4-9fc6-6d43eb9e3a4a", 00:08:58.707 "is_configured": true, 00:08:58.707 "data_offset": 2048, 00:08:58.707 "data_size": 63488 00:08:58.707 }, 00:08:58.707 { 00:08:58.707 "name": null, 00:08:58.707 "uuid": "06a5a6e9-3adf-464b-98c0-b81b00195b39", 00:08:58.707 "is_configured": false, 00:08:58.707 "data_offset": 0, 00:08:58.707 "data_size": 63488 00:08:58.707 }, 00:08:58.707 { 00:08:58.707 "name": "BaseBdev3", 00:08:58.707 "uuid": "58acd982-d463-4efd-ae75-6b936bfa3ffe", 00:08:58.707 "is_configured": true, 00:08:58.707 "data_offset": 2048, 00:08:58.707 "data_size": 63488 00:08:58.707 } 00:08:58.707 ] 00:08:58.707 }' 00:08:58.707 16:05:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:58.707 16:05:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:08:58.966 16:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.966 16:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:58.966 16:05:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.966 16:05:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.966 16:05:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.966 16:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:58.966 16:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:58.966 16:05:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.966 16:05:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.966 [2024-12-12 16:05:25.282067] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:59.227 16:05:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.227 16:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:59.227 16:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:59.227 16:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:59.227 16:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:59.227 16:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:59.227 16:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:08:59.227 16:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:59.227 16:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:59.227 16:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:59.227 16:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:59.227 16:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.227 16:05:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.227 16:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:59.227 16:05:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.227 16:05:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.227 16:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:59.227 "name": "Existed_Raid", 00:08:59.227 "uuid": "595685e6-f8bd-42a3-95f5-0ba9755bc2cd", 00:08:59.227 "strip_size_kb": 64, 00:08:59.227 "state": "configuring", 00:08:59.227 "raid_level": "raid0", 00:08:59.227 "superblock": true, 00:08:59.227 "num_base_bdevs": 3, 00:08:59.227 "num_base_bdevs_discovered": 1, 00:08:59.227 "num_base_bdevs_operational": 3, 00:08:59.227 "base_bdevs_list": [ 00:08:59.227 { 00:08:59.227 "name": null, 00:08:59.227 "uuid": "1b81fb25-9956-4ed4-9fc6-6d43eb9e3a4a", 00:08:59.227 "is_configured": false, 00:08:59.227 "data_offset": 0, 00:08:59.227 "data_size": 63488 00:08:59.227 }, 00:08:59.227 { 00:08:59.227 "name": null, 00:08:59.227 "uuid": "06a5a6e9-3adf-464b-98c0-b81b00195b39", 00:08:59.227 "is_configured": false, 00:08:59.227 "data_offset": 0, 00:08:59.227 "data_size": 63488 00:08:59.227 
}, 00:08:59.227 { 00:08:59.227 "name": "BaseBdev3", 00:08:59.227 "uuid": "58acd982-d463-4efd-ae75-6b936bfa3ffe", 00:08:59.227 "is_configured": true, 00:08:59.227 "data_offset": 2048, 00:08:59.227 "data_size": 63488 00:08:59.227 } 00:08:59.227 ] 00:08:59.227 }' 00:08:59.227 16:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:59.227 16:05:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.487 16:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.487 16:05:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.487 16:05:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.487 16:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:59.745 16:05:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.745 16:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:59.745 16:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:59.745 16:05:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.745 16:05:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.745 [2024-12-12 16:05:25.882328] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:59.745 16:05:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.745 16:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:59.745 16:05:25 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:59.745 16:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:59.745 16:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:59.745 16:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:59.745 16:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:59.745 16:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:59.745 16:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:59.745 16:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:59.745 16:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:59.745 16:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.745 16:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:59.746 16:05:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.746 16:05:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.746 16:05:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.746 16:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:59.746 "name": "Existed_Raid", 00:08:59.746 "uuid": "595685e6-f8bd-42a3-95f5-0ba9755bc2cd", 00:08:59.746 "strip_size_kb": 64, 00:08:59.746 "state": "configuring", 00:08:59.746 "raid_level": "raid0", 00:08:59.746 "superblock": true, 00:08:59.746 "num_base_bdevs": 3, 00:08:59.746 "num_base_bdevs_discovered": 2, 00:08:59.746 
"num_base_bdevs_operational": 3, 00:08:59.746 "base_bdevs_list": [ 00:08:59.746 { 00:08:59.746 "name": null, 00:08:59.746 "uuid": "1b81fb25-9956-4ed4-9fc6-6d43eb9e3a4a", 00:08:59.746 "is_configured": false, 00:08:59.746 "data_offset": 0, 00:08:59.746 "data_size": 63488 00:08:59.746 }, 00:08:59.746 { 00:08:59.746 "name": "BaseBdev2", 00:08:59.746 "uuid": "06a5a6e9-3adf-464b-98c0-b81b00195b39", 00:08:59.746 "is_configured": true, 00:08:59.746 "data_offset": 2048, 00:08:59.746 "data_size": 63488 00:08:59.746 }, 00:08:59.746 { 00:08:59.746 "name": "BaseBdev3", 00:08:59.746 "uuid": "58acd982-d463-4efd-ae75-6b936bfa3ffe", 00:08:59.746 "is_configured": true, 00:08:59.746 "data_offset": 2048, 00:08:59.746 "data_size": 63488 00:08:59.746 } 00:08:59.746 ] 00:08:59.746 }' 00:08:59.746 16:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:59.746 16:05:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.005 16:05:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.005 16:05:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.005 16:05:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.005 16:05:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:00.005 16:05:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.005 16:05:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:00.005 16:05:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.005 16:05:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.005 16:05:26 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:00.005 16:05:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:00.265 16:05:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.265 16:05:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 1b81fb25-9956-4ed4-9fc6-6d43eb9e3a4a 00:09:00.265 16:05:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.265 16:05:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.265 [2024-12-12 16:05:26.432429] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:00.265 [2024-12-12 16:05:26.432770] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:00.265 [2024-12-12 16:05:26.432823] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:00.265 [2024-12-12 16:05:26.433167] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:00.266 [2024-12-12 16:05:26.433366] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:00.266 [2024-12-12 16:05:26.433408] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:00.266 NewBaseBdev 00:09:00.266 [2024-12-12 16:05:26.433597] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:00.266 16:05:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.266 16:05:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:00.266 16:05:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:00.266 16:05:26 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:00.266 16:05:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:00.266 16:05:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:00.266 16:05:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:00.266 16:05:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:00.266 16:05:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.266 16:05:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.266 16:05:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.266 16:05:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:00.266 16:05:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.266 16:05:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.266 [ 00:09:00.266 { 00:09:00.266 "name": "NewBaseBdev", 00:09:00.266 "aliases": [ 00:09:00.266 "1b81fb25-9956-4ed4-9fc6-6d43eb9e3a4a" 00:09:00.266 ], 00:09:00.266 "product_name": "Malloc disk", 00:09:00.266 "block_size": 512, 00:09:00.266 "num_blocks": 65536, 00:09:00.266 "uuid": "1b81fb25-9956-4ed4-9fc6-6d43eb9e3a4a", 00:09:00.266 "assigned_rate_limits": { 00:09:00.266 "rw_ios_per_sec": 0, 00:09:00.266 "rw_mbytes_per_sec": 0, 00:09:00.266 "r_mbytes_per_sec": 0, 00:09:00.266 "w_mbytes_per_sec": 0 00:09:00.266 }, 00:09:00.266 "claimed": true, 00:09:00.266 "claim_type": "exclusive_write", 00:09:00.266 "zoned": false, 00:09:00.266 "supported_io_types": { 00:09:00.266 "read": true, 00:09:00.266 "write": true, 00:09:00.266 "unmap": true, 
00:09:00.266 "flush": true, 00:09:00.266 "reset": true, 00:09:00.266 "nvme_admin": false, 00:09:00.266 "nvme_io": false, 00:09:00.266 "nvme_io_md": false, 00:09:00.266 "write_zeroes": true, 00:09:00.266 "zcopy": true, 00:09:00.266 "get_zone_info": false, 00:09:00.266 "zone_management": false, 00:09:00.266 "zone_append": false, 00:09:00.266 "compare": false, 00:09:00.266 "compare_and_write": false, 00:09:00.266 "abort": true, 00:09:00.266 "seek_hole": false, 00:09:00.266 "seek_data": false, 00:09:00.266 "copy": true, 00:09:00.266 "nvme_iov_md": false 00:09:00.266 }, 00:09:00.266 "memory_domains": [ 00:09:00.266 { 00:09:00.266 "dma_device_id": "system", 00:09:00.266 "dma_device_type": 1 00:09:00.266 }, 00:09:00.266 { 00:09:00.266 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:00.266 "dma_device_type": 2 00:09:00.266 } 00:09:00.266 ], 00:09:00.266 "driver_specific": {} 00:09:00.266 } 00:09:00.266 ] 00:09:00.266 16:05:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.266 16:05:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:00.266 16:05:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:09:00.266 16:05:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:00.266 16:05:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:00.266 16:05:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:00.266 16:05:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:00.266 16:05:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:00.266 16:05:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:00.266 16:05:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:00.266 16:05:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:00.266 16:05:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:00.266 16:05:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.266 16:05:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:00.266 16:05:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.266 16:05:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.266 16:05:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.266 16:05:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:00.266 "name": "Existed_Raid", 00:09:00.266 "uuid": "595685e6-f8bd-42a3-95f5-0ba9755bc2cd", 00:09:00.266 "strip_size_kb": 64, 00:09:00.266 "state": "online", 00:09:00.266 "raid_level": "raid0", 00:09:00.266 "superblock": true, 00:09:00.266 "num_base_bdevs": 3, 00:09:00.266 "num_base_bdevs_discovered": 3, 00:09:00.266 "num_base_bdevs_operational": 3, 00:09:00.266 "base_bdevs_list": [ 00:09:00.266 { 00:09:00.266 "name": "NewBaseBdev", 00:09:00.266 "uuid": "1b81fb25-9956-4ed4-9fc6-6d43eb9e3a4a", 00:09:00.266 "is_configured": true, 00:09:00.266 "data_offset": 2048, 00:09:00.266 "data_size": 63488 00:09:00.266 }, 00:09:00.266 { 00:09:00.266 "name": "BaseBdev2", 00:09:00.266 "uuid": "06a5a6e9-3adf-464b-98c0-b81b00195b39", 00:09:00.266 "is_configured": true, 00:09:00.266 "data_offset": 2048, 00:09:00.266 "data_size": 63488 00:09:00.266 }, 00:09:00.266 { 00:09:00.266 "name": "BaseBdev3", 00:09:00.266 "uuid": "58acd982-d463-4efd-ae75-6b936bfa3ffe", 00:09:00.266 "is_configured": 
true, 00:09:00.266 "data_offset": 2048, 00:09:00.266 "data_size": 63488 00:09:00.266 } 00:09:00.266 ] 00:09:00.266 }' 00:09:00.266 16:05:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:00.266 16:05:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.836 16:05:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:00.836 16:05:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:00.836 16:05:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:00.836 16:05:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:00.836 16:05:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:00.836 16:05:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:00.836 16:05:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:00.836 16:05:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:00.836 16:05:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.836 16:05:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.836 [2024-12-12 16:05:26.932083] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:00.836 16:05:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.836 16:05:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:00.836 "name": "Existed_Raid", 00:09:00.836 "aliases": [ 00:09:00.836 "595685e6-f8bd-42a3-95f5-0ba9755bc2cd" 00:09:00.836 ], 00:09:00.836 "product_name": "Raid Volume", 
00:09:00.836 "block_size": 512, 00:09:00.836 "num_blocks": 190464, 00:09:00.836 "uuid": "595685e6-f8bd-42a3-95f5-0ba9755bc2cd", 00:09:00.836 "assigned_rate_limits": { 00:09:00.836 "rw_ios_per_sec": 0, 00:09:00.836 "rw_mbytes_per_sec": 0, 00:09:00.836 "r_mbytes_per_sec": 0, 00:09:00.836 "w_mbytes_per_sec": 0 00:09:00.836 }, 00:09:00.836 "claimed": false, 00:09:00.836 "zoned": false, 00:09:00.836 "supported_io_types": { 00:09:00.836 "read": true, 00:09:00.836 "write": true, 00:09:00.836 "unmap": true, 00:09:00.836 "flush": true, 00:09:00.836 "reset": true, 00:09:00.836 "nvme_admin": false, 00:09:00.836 "nvme_io": false, 00:09:00.836 "nvme_io_md": false, 00:09:00.836 "write_zeroes": true, 00:09:00.836 "zcopy": false, 00:09:00.836 "get_zone_info": false, 00:09:00.836 "zone_management": false, 00:09:00.836 "zone_append": false, 00:09:00.836 "compare": false, 00:09:00.836 "compare_and_write": false, 00:09:00.836 "abort": false, 00:09:00.836 "seek_hole": false, 00:09:00.836 "seek_data": false, 00:09:00.836 "copy": false, 00:09:00.836 "nvme_iov_md": false 00:09:00.836 }, 00:09:00.836 "memory_domains": [ 00:09:00.836 { 00:09:00.836 "dma_device_id": "system", 00:09:00.836 "dma_device_type": 1 00:09:00.836 }, 00:09:00.836 { 00:09:00.836 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:00.836 "dma_device_type": 2 00:09:00.836 }, 00:09:00.836 { 00:09:00.836 "dma_device_id": "system", 00:09:00.836 "dma_device_type": 1 00:09:00.836 }, 00:09:00.836 { 00:09:00.836 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:00.836 "dma_device_type": 2 00:09:00.836 }, 00:09:00.836 { 00:09:00.836 "dma_device_id": "system", 00:09:00.836 "dma_device_type": 1 00:09:00.836 }, 00:09:00.836 { 00:09:00.836 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:00.836 "dma_device_type": 2 00:09:00.836 } 00:09:00.836 ], 00:09:00.836 "driver_specific": { 00:09:00.836 "raid": { 00:09:00.836 "uuid": "595685e6-f8bd-42a3-95f5-0ba9755bc2cd", 00:09:00.836 "strip_size_kb": 64, 00:09:00.836 "state": "online", 00:09:00.836 
"raid_level": "raid0", 00:09:00.836 "superblock": true, 00:09:00.836 "num_base_bdevs": 3, 00:09:00.836 "num_base_bdevs_discovered": 3, 00:09:00.836 "num_base_bdevs_operational": 3, 00:09:00.836 "base_bdevs_list": [ 00:09:00.836 { 00:09:00.836 "name": "NewBaseBdev", 00:09:00.836 "uuid": "1b81fb25-9956-4ed4-9fc6-6d43eb9e3a4a", 00:09:00.836 "is_configured": true, 00:09:00.836 "data_offset": 2048, 00:09:00.836 "data_size": 63488 00:09:00.836 }, 00:09:00.836 { 00:09:00.836 "name": "BaseBdev2", 00:09:00.836 "uuid": "06a5a6e9-3adf-464b-98c0-b81b00195b39", 00:09:00.836 "is_configured": true, 00:09:00.836 "data_offset": 2048, 00:09:00.836 "data_size": 63488 00:09:00.836 }, 00:09:00.836 { 00:09:00.836 "name": "BaseBdev3", 00:09:00.836 "uuid": "58acd982-d463-4efd-ae75-6b936bfa3ffe", 00:09:00.836 "is_configured": true, 00:09:00.836 "data_offset": 2048, 00:09:00.836 "data_size": 63488 00:09:00.836 } 00:09:00.836 ] 00:09:00.836 } 00:09:00.836 } 00:09:00.836 }' 00:09:00.836 16:05:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:00.836 16:05:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:00.836 BaseBdev2 00:09:00.836 BaseBdev3' 00:09:00.836 16:05:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:00.836 16:05:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:00.836 16:05:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:00.836 16:05:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:00.836 16:05:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.836 16:05:27 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:00.837 16:05:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:00.837 16:05:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.837 16:05:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:00.837 16:05:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:00.837 16:05:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:00.837 16:05:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:00.837 16:05:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.837 16:05:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.837 16:05:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:00.837 16:05:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.837 16:05:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:00.837 16:05:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:00.837 16:05:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:00.837 16:05:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:00.837 16:05:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.837 16:05:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.837 16:05:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:00.837 16:05:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.097 16:05:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:01.097 16:05:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:01.097 16:05:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:01.097 16:05:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.097 16:05:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.097 [2024-12-12 16:05:27.215202] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:01.097 [2024-12-12 16:05:27.215246] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:01.097 [2024-12-12 16:05:27.215358] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:01.097 [2024-12-12 16:05:27.215422] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:01.097 [2024-12-12 16:05:27.215435] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:01.097 16:05:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.097 16:05:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 66464 00:09:01.097 16:05:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 66464 ']' 00:09:01.097 16:05:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 66464 00:09:01.097 16:05:27 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:09:01.097 16:05:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:01.097 16:05:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66464 00:09:01.097 killing process with pid 66464 00:09:01.097 16:05:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:01.097 16:05:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:01.097 16:05:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66464' 00:09:01.097 16:05:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 66464 00:09:01.097 [2024-12-12 16:05:27.264064] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:01.097 16:05:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 66464 00:09:01.357 [2024-12-12 16:05:27.597333] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:02.748 16:05:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:02.748 00:09:02.748 real 0m10.744s 00:09:02.748 user 0m16.718s 00:09:02.748 sys 0m1.906s 00:09:02.748 16:05:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:02.748 16:05:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.748 ************************************ 00:09:02.748 END TEST raid_state_function_test_sb 00:09:02.748 ************************************ 00:09:02.748 16:05:28 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:09:02.748 16:05:28 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:02.748 16:05:28 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:09:02.748 16:05:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:02.748 ************************************ 00:09:02.748 START TEST raid_superblock_test 00:09:02.748 ************************************ 00:09:02.748 16:05:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 3 00:09:02.748 16:05:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:09:02.748 16:05:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:09:02.748 16:05:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:02.748 16:05:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:02.748 16:05:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:02.749 16:05:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:02.749 16:05:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:02.749 16:05:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:02.749 16:05:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:02.749 16:05:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:02.749 16:05:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:02.749 16:05:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:02.749 16:05:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:02.749 16:05:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:09:02.749 16:05:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:09:02.749 16:05:28 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:02.749 16:05:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=67084 00:09:02.749 16:05:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:02.749 16:05:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 67084 00:09:02.749 16:05:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 67084 ']' 00:09:02.749 16:05:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:02.749 16:05:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:02.749 16:05:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:02.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:02.749 16:05:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:02.749 16:05:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.749 [2024-12-12 16:05:29.072634] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:09:02.749 [2024-12-12 16:05:29.072855] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67084 ] 00:09:03.008 [2024-12-12 16:05:29.250732] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:03.268 [2024-12-12 16:05:29.401713] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.527 [2024-12-12 16:05:29.643769] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:03.527 [2024-12-12 16:05:29.643986] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:03.787 16:05:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:03.787 16:05:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:09:03.787 16:05:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:03.787 16:05:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:03.787 16:05:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:03.787 16:05:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:03.787 16:05:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:03.787 16:05:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:03.787 16:05:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:03.787 16:05:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:03.787 16:05:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:03.787 
16:05:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.787 16:05:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.787 malloc1 00:09:03.787 16:05:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.787 16:05:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:03.787 16:05:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.787 16:05:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.787 [2024-12-12 16:05:29.981084] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:03.787 [2024-12-12 16:05:29.981262] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:03.787 [2024-12-12 16:05:29.981304] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:03.787 [2024-12-12 16:05:29.981314] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:03.787 [2024-12-12 16:05:29.983870] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:03.787 [2024-12-12 16:05:29.983926] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:03.787 pt1 00:09:03.787 16:05:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.787 16:05:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:03.787 16:05:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:03.787 16:05:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:03.787 16:05:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:03.787 16:05:29 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:03.787 16:05:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:03.787 16:05:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:03.787 16:05:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:03.787 16:05:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:03.787 16:05:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.787 16:05:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.787 malloc2 00:09:03.787 16:05:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.787 16:05:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:03.787 16:05:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.787 16:05:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.787 [2024-12-12 16:05:30.044241] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:03.787 [2024-12-12 16:05:30.044414] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:03.787 [2024-12-12 16:05:30.044467] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:03.787 [2024-12-12 16:05:30.044506] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:03.787 [2024-12-12 16:05:30.047137] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:03.787 [2024-12-12 16:05:30.047220] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:03.787 
pt2 00:09:03.787 16:05:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.787 16:05:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:03.787 16:05:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:03.787 16:05:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:03.787 16:05:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:03.787 16:05:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:03.787 16:05:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:03.787 16:05:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:03.787 16:05:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:03.787 16:05:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:03.787 16:05:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.787 16:05:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.787 malloc3 00:09:03.787 16:05:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.787 16:05:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:03.787 16:05:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.787 16:05:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.787 [2024-12-12 16:05:30.126907] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:03.787 [2024-12-12 16:05:30.127081] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:03.787 [2024-12-12 16:05:30.127126] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:03.787 [2024-12-12 16:05:30.127157] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:03.787 [2024-12-12 16:05:30.129977] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:03.787 [2024-12-12 16:05:30.130055] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:03.787 pt3 00:09:03.787 16:05:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.787 16:05:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:03.787 16:05:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:03.787 16:05:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:09:03.787 16:05:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.787 16:05:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.047 [2024-12-12 16:05:30.138983] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:04.047 [2024-12-12 16:05:30.141155] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:04.047 [2024-12-12 16:05:30.141272] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:04.047 [2024-12-12 16:05:30.141476] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:04.047 [2024-12-12 16:05:30.141525] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:04.047 [2024-12-12 16:05:30.141856] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
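[Editor's note: a minimal, hypothetical sketch of the filter the harness runs at `bdev_raid.sh@188` above, applied to a stand-in for the `bdev_get_bdevs` JSON in this log (the sample names and the `is_configured:false` entry are illustrative, not from the test). `select()` keeps only configured base bdevs and `-r` emits raw, unquoted names — which is why `base_bdev_names` above is a newline-separated word list.]

```shell
# Stand-in for the .driver_specific.raid portion of the bdev dump above
json='{"driver_specific":{"raid":{"base_bdevs_list":[
  {"name":"pt1","is_configured":true},
  {"name":"pt2","is_configured":true},
  {"name":"pt3","is_configured":false}]}}}'

# select(.is_configured == true) drops unconfigured entries; -r strips quotes
echo "$json" | jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
```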
00:09:04.047 [2024-12-12 16:05:30.142119] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:04.047 [2024-12-12 16:05:30.142162] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:04.047 [2024-12-12 16:05:30.142391] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:04.047 16:05:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.047 16:05:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:04.047 16:05:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:04.047 16:05:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:04.047 16:05:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:04.047 16:05:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:04.047 16:05:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:04.047 16:05:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:04.047 16:05:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:04.047 16:05:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:04.047 16:05:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:04.047 16:05:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.047 16:05:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:04.047 16:05:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.047 16:05:30 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.047 16:05:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.047 16:05:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:04.047 "name": "raid_bdev1", 00:09:04.047 "uuid": "a9080661-5e86-4c0d-9795-68b029b11d57", 00:09:04.047 "strip_size_kb": 64, 00:09:04.047 "state": "online", 00:09:04.047 "raid_level": "raid0", 00:09:04.047 "superblock": true, 00:09:04.047 "num_base_bdevs": 3, 00:09:04.047 "num_base_bdevs_discovered": 3, 00:09:04.047 "num_base_bdevs_operational": 3, 00:09:04.047 "base_bdevs_list": [ 00:09:04.047 { 00:09:04.048 "name": "pt1", 00:09:04.048 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:04.048 "is_configured": true, 00:09:04.048 "data_offset": 2048, 00:09:04.048 "data_size": 63488 00:09:04.048 }, 00:09:04.048 { 00:09:04.048 "name": "pt2", 00:09:04.048 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:04.048 "is_configured": true, 00:09:04.048 "data_offset": 2048, 00:09:04.048 "data_size": 63488 00:09:04.048 }, 00:09:04.048 { 00:09:04.048 "name": "pt3", 00:09:04.048 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:04.048 "is_configured": true, 00:09:04.048 "data_offset": 2048, 00:09:04.048 "data_size": 63488 00:09:04.048 } 00:09:04.048 ] 00:09:04.048 }' 00:09:04.048 16:05:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:04.048 16:05:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.308 16:05:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:04.308 16:05:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:04.308 16:05:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:04.308 16:05:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:09:04.308 16:05:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:04.308 16:05:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:04.308 16:05:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:04.308 16:05:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:04.308 16:05:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.308 16:05:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.308 [2024-12-12 16:05:30.550604] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:04.308 16:05:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.308 16:05:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:04.308 "name": "raid_bdev1", 00:09:04.308 "aliases": [ 00:09:04.308 "a9080661-5e86-4c0d-9795-68b029b11d57" 00:09:04.308 ], 00:09:04.308 "product_name": "Raid Volume", 00:09:04.308 "block_size": 512, 00:09:04.308 "num_blocks": 190464, 00:09:04.308 "uuid": "a9080661-5e86-4c0d-9795-68b029b11d57", 00:09:04.308 "assigned_rate_limits": { 00:09:04.308 "rw_ios_per_sec": 0, 00:09:04.308 "rw_mbytes_per_sec": 0, 00:09:04.308 "r_mbytes_per_sec": 0, 00:09:04.308 "w_mbytes_per_sec": 0 00:09:04.308 }, 00:09:04.308 "claimed": false, 00:09:04.308 "zoned": false, 00:09:04.308 "supported_io_types": { 00:09:04.308 "read": true, 00:09:04.308 "write": true, 00:09:04.308 "unmap": true, 00:09:04.308 "flush": true, 00:09:04.308 "reset": true, 00:09:04.308 "nvme_admin": false, 00:09:04.308 "nvme_io": false, 00:09:04.308 "nvme_io_md": false, 00:09:04.308 "write_zeroes": true, 00:09:04.308 "zcopy": false, 00:09:04.308 "get_zone_info": false, 00:09:04.308 "zone_management": false, 00:09:04.308 "zone_append": false, 00:09:04.308 "compare": 
false, 00:09:04.308 "compare_and_write": false, 00:09:04.308 "abort": false, 00:09:04.308 "seek_hole": false, 00:09:04.308 "seek_data": false, 00:09:04.308 "copy": false, 00:09:04.308 "nvme_iov_md": false 00:09:04.308 }, 00:09:04.308 "memory_domains": [ 00:09:04.308 { 00:09:04.308 "dma_device_id": "system", 00:09:04.308 "dma_device_type": 1 00:09:04.308 }, 00:09:04.308 { 00:09:04.308 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:04.308 "dma_device_type": 2 00:09:04.308 }, 00:09:04.308 { 00:09:04.308 "dma_device_id": "system", 00:09:04.308 "dma_device_type": 1 00:09:04.308 }, 00:09:04.308 { 00:09:04.308 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:04.308 "dma_device_type": 2 00:09:04.308 }, 00:09:04.308 { 00:09:04.308 "dma_device_id": "system", 00:09:04.308 "dma_device_type": 1 00:09:04.308 }, 00:09:04.308 { 00:09:04.308 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:04.308 "dma_device_type": 2 00:09:04.308 } 00:09:04.308 ], 00:09:04.308 "driver_specific": { 00:09:04.308 "raid": { 00:09:04.308 "uuid": "a9080661-5e86-4c0d-9795-68b029b11d57", 00:09:04.308 "strip_size_kb": 64, 00:09:04.308 "state": "online", 00:09:04.308 "raid_level": "raid0", 00:09:04.308 "superblock": true, 00:09:04.308 "num_base_bdevs": 3, 00:09:04.308 "num_base_bdevs_discovered": 3, 00:09:04.308 "num_base_bdevs_operational": 3, 00:09:04.308 "base_bdevs_list": [ 00:09:04.308 { 00:09:04.308 "name": "pt1", 00:09:04.308 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:04.308 "is_configured": true, 00:09:04.308 "data_offset": 2048, 00:09:04.308 "data_size": 63488 00:09:04.308 }, 00:09:04.308 { 00:09:04.308 "name": "pt2", 00:09:04.308 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:04.308 "is_configured": true, 00:09:04.308 "data_offset": 2048, 00:09:04.308 "data_size": 63488 00:09:04.308 }, 00:09:04.308 { 00:09:04.308 "name": "pt3", 00:09:04.308 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:04.308 "is_configured": true, 00:09:04.308 "data_offset": 2048, 00:09:04.308 "data_size": 
63488 00:09:04.308 } 00:09:04.308 ] 00:09:04.308 } 00:09:04.308 } 00:09:04.308 }' 00:09:04.308 16:05:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:04.308 16:05:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:04.308 pt2 00:09:04.308 pt3' 00:09:04.308 16:05:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:04.569 16:05:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:04.569 16:05:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:04.569 16:05:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:04.569 16:05:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:04.569 16:05:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.569 16:05:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.569 16:05:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.569 16:05:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:04.569 16:05:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:04.569 16:05:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:04.569 16:05:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:04.569 16:05:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:04.569 16:05:30 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.569 16:05:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.569 16:05:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.569 16:05:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:04.569 16:05:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:04.569 16:05:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:04.569 16:05:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:04.569 16:05:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:04.569 16:05:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.569 16:05:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.569 16:05:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.569 16:05:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:04.569 16:05:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:04.569 16:05:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:04.569 16:05:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:04.569 16:05:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.569 16:05:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.569 [2024-12-12 16:05:30.850121] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:04.569 16:05:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:09:04.569 16:05:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=a9080661-5e86-4c0d-9795-68b029b11d57 00:09:04.569 16:05:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z a9080661-5e86-4c0d-9795-68b029b11d57 ']' 00:09:04.569 16:05:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:04.569 16:05:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.569 16:05:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.569 [2024-12-12 16:05:30.881693] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:04.569 [2024-12-12 16:05:30.881822] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:04.569 [2024-12-12 16:05:30.881966] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:04.569 [2024-12-12 16:05:30.882050] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:04.569 [2024-12-12 16:05:30.882063] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:09:04.569 16:05:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.569 16:05:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.569 16:05:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:04.569 16:05:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.569 16:05:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.569 16:05:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.829 16:05:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:09:04.829 16:05:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:04.829 16:05:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:04.829 16:05:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:04.829 16:05:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.829 16:05:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.829 16:05:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.829 16:05:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:04.829 16:05:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:04.829 16:05:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.829 16:05:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.829 16:05:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.830 16:05:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:04.830 16:05:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:04.830 16:05:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.830 16:05:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.830 16:05:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.830 16:05:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:04.830 16:05:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.830 16:05:30 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:04.830 16:05:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:04.830 16:05:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.830 16:05:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:04.830 16:05:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:04.830 16:05:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:09:04.830 16:05:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:04.830 16:05:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:04.830 16:05:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:04.830 16:05:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:04.830 16:05:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:04.830 16:05:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:04.830 16:05:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.830 16:05:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.830 [2024-12-12 16:05:31.033526] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:04.830 [2024-12-12 16:05:31.035754] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:04.830 [2024-12-12 16:05:31.035819] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:04.830 [2024-12-12 16:05:31.035884] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:04.830 [2024-12-12 16:05:31.035961] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:04.830 [2024-12-12 16:05:31.035980] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:04.830 [2024-12-12 16:05:31.035997] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:04.830 [2024-12-12 16:05:31.036010] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:09:04.830 request: 00:09:04.830 { 00:09:04.830 "name": "raid_bdev1", 00:09:04.830 "raid_level": "raid0", 00:09:04.830 "base_bdevs": [ 00:09:04.830 "malloc1", 00:09:04.830 "malloc2", 00:09:04.830 "malloc3" 00:09:04.830 ], 00:09:04.830 "strip_size_kb": 64, 00:09:04.830 "superblock": false, 00:09:04.830 "method": "bdev_raid_create", 00:09:04.830 "req_id": 1 00:09:04.830 } 00:09:04.830 Got JSON-RPC error response 00:09:04.830 response: 00:09:04.830 { 00:09:04.830 "code": -17, 00:09:04.830 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:04.830 } 00:09:04.830 16:05:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:04.830 16:05:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:09:04.830 16:05:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:04.830 16:05:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:04.830 16:05:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:04.830 16:05:31 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.830 16:05:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:04.830 16:05:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.830 16:05:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.830 16:05:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.830 16:05:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:04.830 16:05:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:04.830 16:05:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:04.830 16:05:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.830 16:05:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.830 [2024-12-12 16:05:31.101313] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:04.830 [2024-12-12 16:05:31.101498] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:04.830 [2024-12-12 16:05:31.101541] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:04.830 [2024-12-12 16:05:31.101578] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:04.830 [2024-12-12 16:05:31.104204] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:04.830 [2024-12-12 16:05:31.104305] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:04.830 [2024-12-12 16:05:31.104454] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:04.830 [2024-12-12 16:05:31.104566] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:09:04.830 pt1 00:09:04.830 16:05:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.830 16:05:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:09:04.830 16:05:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:04.830 16:05:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:04.830 16:05:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:04.830 16:05:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:04.830 16:05:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:04.830 16:05:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:04.830 16:05:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:04.830 16:05:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:04.830 16:05:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:04.830 16:05:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.830 16:05:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:04.830 16:05:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.830 16:05:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.830 16:05:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.830 16:05:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:04.830 "name": "raid_bdev1", 00:09:04.830 "uuid": "a9080661-5e86-4c0d-9795-68b029b11d57", 00:09:04.830 
"strip_size_kb": 64, 00:09:04.830 "state": "configuring", 00:09:04.830 "raid_level": "raid0", 00:09:04.830 "superblock": true, 00:09:04.830 "num_base_bdevs": 3, 00:09:04.830 "num_base_bdevs_discovered": 1, 00:09:04.830 "num_base_bdevs_operational": 3, 00:09:04.830 "base_bdevs_list": [ 00:09:04.830 { 00:09:04.830 "name": "pt1", 00:09:04.830 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:04.830 "is_configured": true, 00:09:04.830 "data_offset": 2048, 00:09:04.830 "data_size": 63488 00:09:04.830 }, 00:09:04.830 { 00:09:04.830 "name": null, 00:09:04.830 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:04.830 "is_configured": false, 00:09:04.830 "data_offset": 2048, 00:09:04.830 "data_size": 63488 00:09:04.830 }, 00:09:04.830 { 00:09:04.830 "name": null, 00:09:04.830 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:04.830 "is_configured": false, 00:09:04.830 "data_offset": 2048, 00:09:04.830 "data_size": 63488 00:09:04.830 } 00:09:04.830 ] 00:09:04.830 }' 00:09:04.830 16:05:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:04.830 16:05:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.402 16:05:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:09:05.402 16:05:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:05.402 16:05:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.402 16:05:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.402 [2024-12-12 16:05:31.512628] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:05.402 [2024-12-12 16:05:31.512740] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:05.402 [2024-12-12 16:05:31.512770] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 00:09:05.402 [2024-12-12 16:05:31.512781] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:05.402 [2024-12-12 16:05:31.513336] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:05.402 [2024-12-12 16:05:31.513355] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:05.402 [2024-12-12 16:05:31.513464] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:05.402 [2024-12-12 16:05:31.513497] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:05.402 pt2 00:09:05.402 16:05:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.402 16:05:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:05.402 16:05:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.402 16:05:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.402 [2024-12-12 16:05:31.520654] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:05.402 16:05:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.402 16:05:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:09:05.402 16:05:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:05.402 16:05:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:05.402 16:05:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:05.402 16:05:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:05.402 16:05:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:05.402 16:05:31 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:05.402 16:05:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:05.402 16:05:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:05.402 16:05:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:05.402 16:05:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.402 16:05:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.402 16:05:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:05.402 16:05:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.402 16:05:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.402 16:05:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:05.402 "name": "raid_bdev1", 00:09:05.402 "uuid": "a9080661-5e86-4c0d-9795-68b029b11d57", 00:09:05.402 "strip_size_kb": 64, 00:09:05.402 "state": "configuring", 00:09:05.402 "raid_level": "raid0", 00:09:05.402 "superblock": true, 00:09:05.402 "num_base_bdevs": 3, 00:09:05.402 "num_base_bdevs_discovered": 1, 00:09:05.402 "num_base_bdevs_operational": 3, 00:09:05.402 "base_bdevs_list": [ 00:09:05.402 { 00:09:05.402 "name": "pt1", 00:09:05.402 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:05.402 "is_configured": true, 00:09:05.402 "data_offset": 2048, 00:09:05.402 "data_size": 63488 00:09:05.402 }, 00:09:05.402 { 00:09:05.402 "name": null, 00:09:05.402 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:05.402 "is_configured": false, 00:09:05.402 "data_offset": 0, 00:09:05.402 "data_size": 63488 00:09:05.402 }, 00:09:05.402 { 00:09:05.402 "name": null, 00:09:05.402 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:05.402 
"is_configured": false, 00:09:05.402 "data_offset": 2048, 00:09:05.402 "data_size": 63488 00:09:05.402 } 00:09:05.402 ] 00:09:05.402 }' 00:09:05.402 16:05:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:05.402 16:05:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.661 16:05:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:05.661 16:05:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:05.661 16:05:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:05.661 16:05:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.661 16:05:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.661 [2024-12-12 16:05:31.923990] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:05.661 [2024-12-12 16:05:31.924191] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:05.661 [2024-12-12 16:05:31.924238] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:09:05.661 [2024-12-12 16:05:31.924279] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:05.661 [2024-12-12 16:05:31.924929] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:05.661 [2024-12-12 16:05:31.925007] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:05.661 [2024-12-12 16:05:31.925151] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:05.661 [2024-12-12 16:05:31.925215] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:05.661 pt2 00:09:05.661 16:05:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:05.661 16:05:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:05.661 16:05:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:05.661 16:05:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:05.661 16:05:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.661 16:05:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.661 [2024-12-12 16:05:31.935939] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:05.661 [2024-12-12 16:05:31.936053] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:05.661 [2024-12-12 16:05:31.936091] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:05.661 [2024-12-12 16:05:31.936129] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:05.661 [2024-12-12 16:05:31.936676] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:05.661 [2024-12-12 16:05:31.936764] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:05.661 [2024-12-12 16:05:31.936887] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:05.661 [2024-12-12 16:05:31.936963] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:05.661 [2024-12-12 16:05:31.937139] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:05.661 [2024-12-12 16:05:31.937153] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:05.661 [2024-12-12 16:05:31.937456] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:05.661 [2024-12-12 16:05:31.937632] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:05.662 [2024-12-12 16:05:31.937641] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:05.662 [2024-12-12 16:05:31.937814] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:05.662 pt3 00:09:05.662 16:05:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.662 16:05:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:05.662 16:05:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:05.662 16:05:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:05.662 16:05:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:05.662 16:05:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:05.662 16:05:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:05.662 16:05:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:05.662 16:05:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:05.662 16:05:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:05.662 16:05:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:05.662 16:05:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:05.662 16:05:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:05.662 16:05:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.662 16:05:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:09:05.662 16:05:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.662 16:05:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.662 16:05:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.662 16:05:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:05.662 "name": "raid_bdev1", 00:09:05.662 "uuid": "a9080661-5e86-4c0d-9795-68b029b11d57", 00:09:05.662 "strip_size_kb": 64, 00:09:05.662 "state": "online", 00:09:05.662 "raid_level": "raid0", 00:09:05.662 "superblock": true, 00:09:05.662 "num_base_bdevs": 3, 00:09:05.662 "num_base_bdevs_discovered": 3, 00:09:05.662 "num_base_bdevs_operational": 3, 00:09:05.662 "base_bdevs_list": [ 00:09:05.662 { 00:09:05.662 "name": "pt1", 00:09:05.662 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:05.662 "is_configured": true, 00:09:05.662 "data_offset": 2048, 00:09:05.662 "data_size": 63488 00:09:05.662 }, 00:09:05.662 { 00:09:05.662 "name": "pt2", 00:09:05.662 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:05.662 "is_configured": true, 00:09:05.662 "data_offset": 2048, 00:09:05.662 "data_size": 63488 00:09:05.662 }, 00:09:05.662 { 00:09:05.662 "name": "pt3", 00:09:05.662 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:05.662 "is_configured": true, 00:09:05.662 "data_offset": 2048, 00:09:05.662 "data_size": 63488 00:09:05.662 } 00:09:05.662 ] 00:09:05.662 }' 00:09:05.662 16:05:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:05.662 16:05:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.231 16:05:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:06.231 16:05:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:06.231 16:05:32 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:06.231 16:05:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:06.231 16:05:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:06.231 16:05:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:06.231 16:05:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:06.231 16:05:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:06.231 16:05:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.231 16:05:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.231 [2024-12-12 16:05:32.367576] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:06.231 16:05:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.231 16:05:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:06.231 "name": "raid_bdev1", 00:09:06.231 "aliases": [ 00:09:06.231 "a9080661-5e86-4c0d-9795-68b029b11d57" 00:09:06.231 ], 00:09:06.231 "product_name": "Raid Volume", 00:09:06.231 "block_size": 512, 00:09:06.231 "num_blocks": 190464, 00:09:06.231 "uuid": "a9080661-5e86-4c0d-9795-68b029b11d57", 00:09:06.231 "assigned_rate_limits": { 00:09:06.231 "rw_ios_per_sec": 0, 00:09:06.231 "rw_mbytes_per_sec": 0, 00:09:06.231 "r_mbytes_per_sec": 0, 00:09:06.231 "w_mbytes_per_sec": 0 00:09:06.231 }, 00:09:06.231 "claimed": false, 00:09:06.231 "zoned": false, 00:09:06.231 "supported_io_types": { 00:09:06.231 "read": true, 00:09:06.231 "write": true, 00:09:06.231 "unmap": true, 00:09:06.231 "flush": true, 00:09:06.231 "reset": true, 00:09:06.231 "nvme_admin": false, 00:09:06.231 "nvme_io": false, 00:09:06.231 "nvme_io_md": false, 00:09:06.231 
"write_zeroes": true, 00:09:06.231 "zcopy": false, 00:09:06.231 "get_zone_info": false, 00:09:06.231 "zone_management": false, 00:09:06.231 "zone_append": false, 00:09:06.231 "compare": false, 00:09:06.231 "compare_and_write": false, 00:09:06.231 "abort": false, 00:09:06.231 "seek_hole": false, 00:09:06.231 "seek_data": false, 00:09:06.231 "copy": false, 00:09:06.231 "nvme_iov_md": false 00:09:06.231 }, 00:09:06.231 "memory_domains": [ 00:09:06.231 { 00:09:06.231 "dma_device_id": "system", 00:09:06.231 "dma_device_type": 1 00:09:06.231 }, 00:09:06.231 { 00:09:06.231 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:06.231 "dma_device_type": 2 00:09:06.231 }, 00:09:06.231 { 00:09:06.231 "dma_device_id": "system", 00:09:06.231 "dma_device_type": 1 00:09:06.231 }, 00:09:06.231 { 00:09:06.231 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:06.231 "dma_device_type": 2 00:09:06.231 }, 00:09:06.231 { 00:09:06.231 "dma_device_id": "system", 00:09:06.231 "dma_device_type": 1 00:09:06.231 }, 00:09:06.231 { 00:09:06.231 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:06.231 "dma_device_type": 2 00:09:06.231 } 00:09:06.231 ], 00:09:06.231 "driver_specific": { 00:09:06.231 "raid": { 00:09:06.231 "uuid": "a9080661-5e86-4c0d-9795-68b029b11d57", 00:09:06.231 "strip_size_kb": 64, 00:09:06.231 "state": "online", 00:09:06.231 "raid_level": "raid0", 00:09:06.231 "superblock": true, 00:09:06.231 "num_base_bdevs": 3, 00:09:06.231 "num_base_bdevs_discovered": 3, 00:09:06.231 "num_base_bdevs_operational": 3, 00:09:06.231 "base_bdevs_list": [ 00:09:06.231 { 00:09:06.231 "name": "pt1", 00:09:06.231 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:06.231 "is_configured": true, 00:09:06.231 "data_offset": 2048, 00:09:06.231 "data_size": 63488 00:09:06.231 }, 00:09:06.231 { 00:09:06.231 "name": "pt2", 00:09:06.231 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:06.231 "is_configured": true, 00:09:06.231 "data_offset": 2048, 00:09:06.231 "data_size": 63488 00:09:06.231 }, 00:09:06.231 
{ 00:09:06.231 "name": "pt3", 00:09:06.231 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:06.231 "is_configured": true, 00:09:06.231 "data_offset": 2048, 00:09:06.231 "data_size": 63488 00:09:06.231 } 00:09:06.231 ] 00:09:06.231 } 00:09:06.231 } 00:09:06.231 }' 00:09:06.231 16:05:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:06.231 16:05:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:06.231 pt2 00:09:06.231 pt3' 00:09:06.231 16:05:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:06.231 16:05:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:06.231 16:05:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:06.231 16:05:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:06.231 16:05:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.231 16:05:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.231 16:05:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:06.231 16:05:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.231 16:05:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:06.231 16:05:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:06.231 16:05:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:06.231 16:05:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:06.231 16:05:32 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.231 16:05:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.231 16:05:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:06.231 16:05:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.232 16:05:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:06.232 16:05:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:06.232 16:05:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:06.232 16:05:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:06.232 16:05:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:06.232 16:05:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.232 16:05:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.491 16:05:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.491 16:05:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:06.491 16:05:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:06.491 16:05:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:06.491 16:05:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:06.491 16:05:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.491 16:05:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.491 
[2024-12-12 16:05:32.635102] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:06.491 16:05:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.491 16:05:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' a9080661-5e86-4c0d-9795-68b029b11d57 '!=' a9080661-5e86-4c0d-9795-68b029b11d57 ']' 00:09:06.491 16:05:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:09:06.491 16:05:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:06.491 16:05:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:06.491 16:05:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 67084 00:09:06.491 16:05:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 67084 ']' 00:09:06.491 16:05:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 67084 00:09:06.491 16:05:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:09:06.491 16:05:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:06.491 16:05:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67084 00:09:06.492 killing process with pid 67084 00:09:06.492 16:05:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:06.492 16:05:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:06.492 16:05:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67084' 00:09:06.492 16:05:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 67084 00:09:06.492 [2024-12-12 16:05:32.713496] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:06.492 16:05:32 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@978 -- # wait 67084 00:09:06.492 [2024-12-12 16:05:32.713652] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:06.492 [2024-12-12 16:05:32.713731] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:06.492 [2024-12-12 16:05:32.713745] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:06.750 [2024-12-12 16:05:33.065400] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:08.129 16:05:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:08.129 00:09:08.129 real 0m5.356s 00:09:08.129 user 0m7.443s 00:09:08.129 sys 0m0.971s 00:09:08.129 16:05:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:08.129 16:05:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.129 ************************************ 00:09:08.129 END TEST raid_superblock_test 00:09:08.129 ************************************ 00:09:08.129 16:05:34 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:09:08.129 16:05:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:08.129 16:05:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:08.129 16:05:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:08.129 ************************************ 00:09:08.129 START TEST raid_read_error_test 00:09:08.129 ************************************ 00:09:08.129 16:05:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 read 00:09:08.129 16:05:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:09:08.129 16:05:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:08.129 16:05:34 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:08.129 16:05:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:08.129 16:05:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:08.129 16:05:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:08.129 16:05:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:08.129 16:05:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:08.129 16:05:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:08.129 16:05:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:08.129 16:05:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:08.129 16:05:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:08.129 16:05:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:08.129 16:05:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:08.129 16:05:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:08.129 16:05:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:08.129 16:05:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:08.129 16:05:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:08.129 16:05:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:08.129 16:05:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:08.129 16:05:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:08.129 16:05:34 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:09:08.129 16:05:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:08.129 16:05:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:08.129 16:05:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:08.129 16:05:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.WZPqBFDyXv 00:09:08.129 16:05:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67343 00:09:08.129 16:05:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:08.129 16:05:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67343 00:09:08.129 16:05:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 67343 ']' 00:09:08.129 16:05:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:08.129 16:05:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:08.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:08.129 16:05:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:08.129 16:05:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:08.129 16:05:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.388 [2024-12-12 16:05:34.513038] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:09:08.388 [2024-12-12 16:05:34.513150] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67343 ] 00:09:08.388 [2024-12-12 16:05:34.687265] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:08.657 [2024-12-12 16:05:34.830066] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.933 [2024-12-12 16:05:35.078936] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:08.933 [2024-12-12 16:05:35.079018] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:09.191 16:05:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:09.191 16:05:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:09.191 16:05:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:09.191 16:05:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:09.191 16:05:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.191 16:05:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.191 BaseBdev1_malloc 00:09:09.191 16:05:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.191 16:05:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:09.191 16:05:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.191 16:05:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.192 true 00:09:09.192 16:05:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:09.192 16:05:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:09.192 16:05:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.192 16:05:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.192 [2024-12-12 16:05:35.411883] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:09.192 [2024-12-12 16:05:35.411967] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:09.192 [2024-12-12 16:05:35.411990] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:09.192 [2024-12-12 16:05:35.412001] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:09.192 [2024-12-12 16:05:35.414322] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:09.192 [2024-12-12 16:05:35.414359] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:09.192 BaseBdev1 00:09:09.192 16:05:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.192 16:05:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:09.192 16:05:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:09.192 16:05:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.192 16:05:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.192 BaseBdev2_malloc 00:09:09.192 16:05:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.192 16:05:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:09.192 16:05:35 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.192 16:05:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.192 true 00:09:09.192 16:05:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.192 16:05:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:09.192 16:05:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.192 16:05:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.192 [2024-12-12 16:05:35.485816] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:09.192 [2024-12-12 16:05:35.485874] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:09.192 [2024-12-12 16:05:35.485902] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:09.192 [2024-12-12 16:05:35.485915] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:09.192 [2024-12-12 16:05:35.488277] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:09.192 [2024-12-12 16:05:35.488316] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:09.192 BaseBdev2 00:09:09.192 16:05:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.192 16:05:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:09.192 16:05:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:09.192 16:05:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.192 16:05:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.451 BaseBdev3_malloc 00:09:09.451 16:05:35 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.451 16:05:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:09.451 16:05:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.451 16:05:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.451 true 00:09:09.451 16:05:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.452 16:05:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:09.452 16:05:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.452 16:05:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.452 [2024-12-12 16:05:35.572143] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:09.452 [2024-12-12 16:05:35.572198] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:09.452 [2024-12-12 16:05:35.572216] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:09.452 [2024-12-12 16:05:35.572229] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:09.452 [2024-12-12 16:05:35.574563] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:09.452 [2024-12-12 16:05:35.574600] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:09.452 BaseBdev3 00:09:09.452 16:05:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.452 16:05:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:09.452 16:05:35 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.452 16:05:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.452 [2024-12-12 16:05:35.584208] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:09.452 [2024-12-12 16:05:35.586316] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:09.452 [2024-12-12 16:05:35.586391] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:09.452 [2024-12-12 16:05:35.586590] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:09.452 [2024-12-12 16:05:35.586606] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:09.452 [2024-12-12 16:05:35.586848] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:09.452 [2024-12-12 16:05:35.587033] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:09.452 [2024-12-12 16:05:35.587054] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:09.452 [2024-12-12 16:05:35.587200] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:09.452 16:05:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.452 16:05:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:09.452 16:05:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:09.452 16:05:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:09.452 16:05:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:09.452 16:05:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:09.452 16:05:35 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:09.452 16:05:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:09.452 16:05:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:09.452 16:05:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:09.452 16:05:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:09.452 16:05:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.452 16:05:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:09.452 16:05:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.452 16:05:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.452 16:05:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.452 16:05:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:09.452 "name": "raid_bdev1", 00:09:09.452 "uuid": "9f6cd0d4-3cf2-436a-93cd-fd5c2a8f9176", 00:09:09.452 "strip_size_kb": 64, 00:09:09.452 "state": "online", 00:09:09.452 "raid_level": "raid0", 00:09:09.452 "superblock": true, 00:09:09.452 "num_base_bdevs": 3, 00:09:09.452 "num_base_bdevs_discovered": 3, 00:09:09.452 "num_base_bdevs_operational": 3, 00:09:09.452 "base_bdevs_list": [ 00:09:09.452 { 00:09:09.452 "name": "BaseBdev1", 00:09:09.452 "uuid": "6423f734-e1d3-57ce-8c33-7ef4f9224baa", 00:09:09.452 "is_configured": true, 00:09:09.452 "data_offset": 2048, 00:09:09.452 "data_size": 63488 00:09:09.452 }, 00:09:09.452 { 00:09:09.452 "name": "BaseBdev2", 00:09:09.452 "uuid": "35f86928-5b0b-5bb1-8e3e-97acafec0800", 00:09:09.452 "is_configured": true, 00:09:09.452 "data_offset": 2048, 00:09:09.452 "data_size": 63488 
00:09:09.452 }, 00:09:09.452 { 00:09:09.452 "name": "BaseBdev3", 00:09:09.452 "uuid": "c4d8072b-e444-5667-9824-925f16b45399", 00:09:09.452 "is_configured": true, 00:09:09.452 "data_offset": 2048, 00:09:09.452 "data_size": 63488 00:09:09.452 } 00:09:09.452 ] 00:09:09.452 }' 00:09:09.452 16:05:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:09.452 16:05:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.710 16:05:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:09.710 16:05:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:09.969 [2024-12-12 16:05:36.112895] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:10.907 16:05:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:10.907 16:05:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.907 16:05:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.907 16:05:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.907 16:05:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:10.907 16:05:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:10.907 16:05:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:10.907 16:05:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:10.907 16:05:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:10.907 16:05:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:09:10.907 16:05:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:10.907 16:05:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:10.907 16:05:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:10.907 16:05:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:10.907 16:05:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:10.907 16:05:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:10.907 16:05:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:10.907 16:05:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:10.907 16:05:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.907 16:05:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.907 16:05:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.907 16:05:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.907 16:05:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:10.907 "name": "raid_bdev1", 00:09:10.907 "uuid": "9f6cd0d4-3cf2-436a-93cd-fd5c2a8f9176", 00:09:10.907 "strip_size_kb": 64, 00:09:10.907 "state": "online", 00:09:10.907 "raid_level": "raid0", 00:09:10.907 "superblock": true, 00:09:10.907 "num_base_bdevs": 3, 00:09:10.907 "num_base_bdevs_discovered": 3, 00:09:10.907 "num_base_bdevs_operational": 3, 00:09:10.907 "base_bdevs_list": [ 00:09:10.907 { 00:09:10.907 "name": "BaseBdev1", 00:09:10.907 "uuid": "6423f734-e1d3-57ce-8c33-7ef4f9224baa", 00:09:10.907 "is_configured": true, 00:09:10.907 "data_offset": 2048, 00:09:10.907 "data_size": 63488 
00:09:10.907 }, 00:09:10.907 { 00:09:10.907 "name": "BaseBdev2", 00:09:10.907 "uuid": "35f86928-5b0b-5bb1-8e3e-97acafec0800", 00:09:10.907 "is_configured": true, 00:09:10.907 "data_offset": 2048, 00:09:10.907 "data_size": 63488 00:09:10.907 }, 00:09:10.907 { 00:09:10.907 "name": "BaseBdev3", 00:09:10.907 "uuid": "c4d8072b-e444-5667-9824-925f16b45399", 00:09:10.907 "is_configured": true, 00:09:10.907 "data_offset": 2048, 00:09:10.907 "data_size": 63488 00:09:10.907 } 00:09:10.907 ] 00:09:10.907 }' 00:09:10.907 16:05:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:10.907 16:05:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.167 16:05:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:11.168 16:05:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.168 16:05:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.168 [2024-12-12 16:05:37.453869] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:11.168 [2024-12-12 16:05:37.453932] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:11.168 [2024-12-12 16:05:37.456600] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:11.168 [2024-12-12 16:05:37.456652] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:11.168 [2024-12-12 16:05:37.456693] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:11.168 [2024-12-12 16:05:37.456703] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:11.168 { 00:09:11.168 "results": [ 00:09:11.168 { 00:09:11.168 "job": "raid_bdev1", 00:09:11.168 "core_mask": "0x1", 00:09:11.168 "workload": "randrw", 00:09:11.168 "percentage": 50, 
00:09:11.168 "status": "finished", 00:09:11.168 "queue_depth": 1, 00:09:11.168 "io_size": 131072, 00:09:11.168 "runtime": 1.341514, 00:09:11.168 "iops": 13226.101255745374, 00:09:11.168 "mibps": 1653.2626569681718, 00:09:11.168 "io_failed": 1, 00:09:11.168 "io_timeout": 0, 00:09:11.168 "avg_latency_us": 106.03670327333724, 00:09:11.168 "min_latency_us": 23.36419213973799, 00:09:11.168 "max_latency_us": 1495.3082969432314 00:09:11.168 } 00:09:11.168 ], 00:09:11.168 "core_count": 1 00:09:11.168 } 00:09:11.168 16:05:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.168 16:05:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67343 00:09:11.168 16:05:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 67343 ']' 00:09:11.168 16:05:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 67343 00:09:11.168 16:05:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:09:11.168 16:05:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:11.168 16:05:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67343 00:09:11.168 16:05:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:11.168 16:05:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:11.168 killing process with pid 67343 00:09:11.168 16:05:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67343' 00:09:11.168 16:05:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 67343 00:09:11.168 [2024-12-12 16:05:37.501528] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:11.168 16:05:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 67343 00:09:11.427 [2024-12-12 
16:05:37.753739] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:12.808 16:05:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:12.808 16:05:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.WZPqBFDyXv 00:09:12.808 16:05:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:12.808 16:05:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:09:12.808 16:05:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:12.808 16:05:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:12.808 16:05:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:12.808 16:05:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:09:12.808 00:09:12.808 real 0m4.683s 00:09:12.808 user 0m5.390s 00:09:12.808 sys 0m0.677s 00:09:12.808 16:05:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:12.808 16:05:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.808 ************************************ 00:09:12.808 END TEST raid_read_error_test 00:09:12.808 ************************************ 00:09:12.808 16:05:39 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:09:12.808 16:05:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:12.808 16:05:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:12.808 16:05:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:12.808 ************************************ 00:09:12.808 START TEST raid_write_error_test 00:09:12.808 ************************************ 00:09:12.808 16:05:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 write 00:09:12.808 16:05:39 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:09:12.808 16:05:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:12.808 16:05:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:13.068 16:05:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:13.068 16:05:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:13.068 16:05:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:13.068 16:05:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:13.068 16:05:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:13.068 16:05:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:13.068 16:05:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:13.068 16:05:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:13.068 16:05:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:13.068 16:05:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:13.068 16:05:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:13.068 16:05:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:13.068 16:05:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:13.068 16:05:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:13.068 16:05:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:13.068 16:05:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:13.068 16:05:39 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:13.068 16:05:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:13.068 16:05:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:09:13.068 16:05:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:13.068 16:05:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:13.068 16:05:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:13.068 16:05:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.9NzHxwosR4 00:09:13.068 16:05:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67488 00:09:13.068 16:05:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67488 00:09:13.068 16:05:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:13.068 16:05:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 67488 ']' 00:09:13.068 16:05:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:13.068 16:05:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:13.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:13.068 16:05:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:13.068 16:05:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:13.068 16:05:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.068 [2024-12-12 16:05:39.259122] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:09:13.068 [2024-12-12 16:05:39.259236] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67488 ] 00:09:13.328 [2024-12-12 16:05:39.434509] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:13.328 [2024-12-12 16:05:39.575028] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:13.588 [2024-12-12 16:05:39.812858] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:13.588 [2024-12-12 16:05:39.812945] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:13.848 16:05:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:13.848 16:05:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:13.848 16:05:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:13.848 16:05:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:13.848 16:05:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.848 16:05:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.848 BaseBdev1_malloc 00:09:13.848 16:05:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.848 16:05:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:09:13.848 16:05:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.848 16:05:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.848 true 00:09:13.848 16:05:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.848 16:05:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:13.848 16:05:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.848 16:05:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.848 [2024-12-12 16:05:40.142745] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:13.848 [2024-12-12 16:05:40.142821] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:13.848 [2024-12-12 16:05:40.142843] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:13.848 [2024-12-12 16:05:40.142855] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:13.848 [2024-12-12 16:05:40.145354] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:13.848 [2024-12-12 16:05:40.145395] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:13.848 BaseBdev1 00:09:13.848 16:05:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.848 16:05:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:13.848 16:05:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:13.848 16:05:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.848 16:05:40 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:14.108 BaseBdev2_malloc 00:09:14.108 16:05:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.109 16:05:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:14.109 16:05:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.109 16:05:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.109 true 00:09:14.109 16:05:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.109 16:05:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:14.109 16:05:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.109 16:05:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.109 [2024-12-12 16:05:40.221831] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:14.109 [2024-12-12 16:05:40.221923] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:14.109 [2024-12-12 16:05:40.221944] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:14.109 [2024-12-12 16:05:40.221958] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:14.109 [2024-12-12 16:05:40.224473] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:14.109 [2024-12-12 16:05:40.224515] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:14.109 BaseBdev2 00:09:14.109 16:05:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.109 16:05:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:14.109 16:05:40 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:14.109 16:05:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.109 16:05:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.109 BaseBdev3_malloc 00:09:14.109 16:05:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.109 16:05:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:14.109 16:05:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.109 16:05:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.109 true 00:09:14.109 16:05:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.109 16:05:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:14.109 16:05:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.109 16:05:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.109 [2024-12-12 16:05:40.315707] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:14.109 [2024-12-12 16:05:40.315778] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:14.109 [2024-12-12 16:05:40.315799] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:14.109 [2024-12-12 16:05:40.315812] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:14.109 [2024-12-12 16:05:40.318329] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:14.109 [2024-12-12 16:05:40.318368] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:09:14.109 BaseBdev3 00:09:14.109 16:05:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.109 16:05:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:14.109 16:05:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.109 16:05:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.109 [2024-12-12 16:05:40.327801] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:14.109 [2024-12-12 16:05:40.329993] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:14.109 [2024-12-12 16:05:40.330081] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:14.109 [2024-12-12 16:05:40.330313] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:14.109 [2024-12-12 16:05:40.330334] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:14.109 [2024-12-12 16:05:40.330626] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:14.109 [2024-12-12 16:05:40.330815] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:14.109 [2024-12-12 16:05:40.330835] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:14.109 [2024-12-12 16:05:40.331054] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:14.109 16:05:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.109 16:05:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:14.109 16:05:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:09:14.109 16:05:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:14.109 16:05:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:14.109 16:05:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:14.109 16:05:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:14.109 16:05:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:14.109 16:05:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:14.109 16:05:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:14.109 16:05:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:14.109 16:05:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.109 16:05:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:14.109 16:05:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.109 16:05:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.109 16:05:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.109 16:05:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:14.109 "name": "raid_bdev1", 00:09:14.109 "uuid": "51053d7e-30c5-469f-9892-84a3135bb84a", 00:09:14.109 "strip_size_kb": 64, 00:09:14.109 "state": "online", 00:09:14.109 "raid_level": "raid0", 00:09:14.109 "superblock": true, 00:09:14.109 "num_base_bdevs": 3, 00:09:14.109 "num_base_bdevs_discovered": 3, 00:09:14.109 "num_base_bdevs_operational": 3, 00:09:14.109 "base_bdevs_list": [ 00:09:14.109 { 00:09:14.109 "name": "BaseBdev1", 
00:09:14.109 "uuid": "ae90e3df-1661-58db-ae2c-e51db6a9db6e", 00:09:14.109 "is_configured": true, 00:09:14.109 "data_offset": 2048, 00:09:14.109 "data_size": 63488 00:09:14.109 }, 00:09:14.109 { 00:09:14.109 "name": "BaseBdev2", 00:09:14.109 "uuid": "3bfb227e-b00a-5ca7-9fcb-a8a8d852616b", 00:09:14.109 "is_configured": true, 00:09:14.109 "data_offset": 2048, 00:09:14.109 "data_size": 63488 00:09:14.109 }, 00:09:14.109 { 00:09:14.109 "name": "BaseBdev3", 00:09:14.109 "uuid": "a85b9b03-d3be-52b0-ba7c-b5d9cce3dac4", 00:09:14.109 "is_configured": true, 00:09:14.109 "data_offset": 2048, 00:09:14.109 "data_size": 63488 00:09:14.109 } 00:09:14.109 ] 00:09:14.109 }' 00:09:14.109 16:05:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:14.109 16:05:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.369 16:05:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:14.369 16:05:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:14.629 [2024-12-12 16:05:40.844462] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:15.568 16:05:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:15.568 16:05:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.568 16:05:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.568 16:05:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.568 16:05:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:15.568 16:05:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:15.568 16:05:41 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:15.568 16:05:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:15.568 16:05:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:15.568 16:05:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:15.568 16:05:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:15.568 16:05:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:15.568 16:05:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:15.568 16:05:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:15.568 16:05:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:15.568 16:05:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:15.568 16:05:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:15.568 16:05:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:15.568 16:05:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.568 16:05:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.568 16:05:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.568 16:05:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.568 16:05:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:15.568 "name": "raid_bdev1", 00:09:15.568 "uuid": "51053d7e-30c5-469f-9892-84a3135bb84a", 00:09:15.568 "strip_size_kb": 64, 00:09:15.568 "state": "online", 00:09:15.568 
"raid_level": "raid0", 00:09:15.568 "superblock": true, 00:09:15.568 "num_base_bdevs": 3, 00:09:15.568 "num_base_bdevs_discovered": 3, 00:09:15.568 "num_base_bdevs_operational": 3, 00:09:15.568 "base_bdevs_list": [ 00:09:15.568 { 00:09:15.568 "name": "BaseBdev1", 00:09:15.568 "uuid": "ae90e3df-1661-58db-ae2c-e51db6a9db6e", 00:09:15.568 "is_configured": true, 00:09:15.568 "data_offset": 2048, 00:09:15.568 "data_size": 63488 00:09:15.568 }, 00:09:15.568 { 00:09:15.568 "name": "BaseBdev2", 00:09:15.568 "uuid": "3bfb227e-b00a-5ca7-9fcb-a8a8d852616b", 00:09:15.568 "is_configured": true, 00:09:15.568 "data_offset": 2048, 00:09:15.568 "data_size": 63488 00:09:15.568 }, 00:09:15.568 { 00:09:15.568 "name": "BaseBdev3", 00:09:15.568 "uuid": "a85b9b03-d3be-52b0-ba7c-b5d9cce3dac4", 00:09:15.568 "is_configured": true, 00:09:15.568 "data_offset": 2048, 00:09:15.568 "data_size": 63488 00:09:15.568 } 00:09:15.568 ] 00:09:15.568 }' 00:09:15.568 16:05:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:15.568 16:05:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.841 16:05:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:15.841 16:05:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.841 16:05:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.841 [2024-12-12 16:05:42.112471] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:15.841 [2024-12-12 16:05:42.112521] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:15.841 [2024-12-12 16:05:42.115177] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:15.841 [2024-12-12 16:05:42.115229] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:15.841 [2024-12-12 16:05:42.115275] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:15.841 [2024-12-12 16:05:42.115285] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:15.841 { 00:09:15.841 "results": [ 00:09:15.841 { 00:09:15.841 "job": "raid_bdev1", 00:09:15.841 "core_mask": "0x1", 00:09:15.841 "workload": "randrw", 00:09:15.841 "percentage": 50, 00:09:15.841 "status": "finished", 00:09:15.841 "queue_depth": 1, 00:09:15.841 "io_size": 131072, 00:09:15.841 "runtime": 1.268423, 00:09:15.841 "iops": 13301.556342008935, 00:09:15.841 "mibps": 1662.6945427511168, 00:09:15.841 "io_failed": 1, 00:09:15.841 "io_timeout": 0, 00:09:15.841 "avg_latency_us": 105.61017578793746, 00:09:15.841 "min_latency_us": 27.388646288209607, 00:09:15.841 "max_latency_us": 1452.380786026201 00:09:15.841 } 00:09:15.841 ], 00:09:15.841 "core_count": 1 00:09:15.841 } 00:09:15.841 16:05:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.841 16:05:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67488 00:09:15.841 16:05:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 67488 ']' 00:09:15.841 16:05:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 67488 00:09:15.841 16:05:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:09:15.841 16:05:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:15.841 16:05:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67488 00:09:15.841 16:05:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:15.841 16:05:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:15.841 killing process with pid 67488 00:09:15.841 
16:05:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67488' 00:09:15.841 16:05:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 67488 00:09:15.841 [2024-12-12 16:05:42.144269] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:15.841 16:05:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 67488 00:09:16.106 [2024-12-12 16:05:42.411775] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:17.487 16:05:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:17.487 16:05:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.9NzHxwosR4 00:09:17.487 16:05:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:17.488 16:05:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.79 00:09:17.488 16:05:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:17.488 16:05:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:17.488 16:05:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:17.488 16:05:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.79 != \0\.\0\0 ]] 00:09:17.488 00:09:17.488 real 0m4.593s 00:09:17.488 user 0m5.255s 00:09:17.488 sys 0m0.632s 00:09:17.488 16:05:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:17.488 16:05:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.488 ************************************ 00:09:17.488 END TEST raid_write_error_test 00:09:17.488 ************************************ 00:09:17.488 16:05:43 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:17.488 16:05:43 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test concat 3 false 00:09:17.488 16:05:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:17.488 16:05:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:17.488 16:05:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:17.488 ************************************ 00:09:17.488 START TEST raid_state_function_test 00:09:17.488 ************************************ 00:09:17.488 16:05:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 false 00:09:17.488 16:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:17.488 16:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:17.488 16:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:17.488 16:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:17.488 16:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:17.488 16:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:17.488 16:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:17.488 16:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:17.488 16:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:17.488 16:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:17.488 16:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:17.488 16:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:17.488 16:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:17.488 16:05:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:17.488 16:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:17.488 16:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:17.488 16:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:17.488 16:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:17.488 16:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:17.488 16:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:17.488 16:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:17.488 16:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:17.488 16:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:17.488 16:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:17.488 16:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:17.488 16:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:17.488 16:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=67632 00:09:17.488 16:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:17.488 16:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67632' 00:09:17.488 Process raid pid: 67632 00:09:17.488 16:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 67632 00:09:17.488 16:05:43 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 67632 ']' 00:09:17.488 16:05:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:17.488 16:05:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:17.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:17.488 16:05:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:17.488 16:05:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:17.488 16:05:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.748 [2024-12-12 16:05:43.911519] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:09:17.748 [2024-12-12 16:05:43.912059] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:17.748 [2024-12-12 16:05:44.090515] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:18.008 [2024-12-12 16:05:44.230986] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:18.267 [2024-12-12 16:05:44.472015] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:18.267 [2024-12-12 16:05:44.472066] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:18.527 16:05:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:18.527 16:05:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:09:18.527 16:05:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:18.527 16:05:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.527 16:05:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.527 [2024-12-12 16:05:44.754495] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:18.527 [2024-12-12 16:05:44.754561] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:18.527 [2024-12-12 16:05:44.754572] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:18.527 [2024-12-12 16:05:44.754583] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:18.527 [2024-12-12 16:05:44.754589] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:18.527 [2024-12-12 16:05:44.754600] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:18.527 16:05:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.527 16:05:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:18.527 16:05:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:18.527 16:05:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:18.527 16:05:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:18.527 16:05:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:18.527 16:05:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:18.527 16:05:44 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:18.527 16:05:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:18.527 16:05:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:18.527 16:05:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:18.528 16:05:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.528 16:05:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:18.528 16:05:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.528 16:05:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.528 16:05:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.528 16:05:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:18.528 "name": "Existed_Raid", 00:09:18.528 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:18.528 "strip_size_kb": 64, 00:09:18.528 "state": "configuring", 00:09:18.528 "raid_level": "concat", 00:09:18.528 "superblock": false, 00:09:18.528 "num_base_bdevs": 3, 00:09:18.528 "num_base_bdevs_discovered": 0, 00:09:18.528 "num_base_bdevs_operational": 3, 00:09:18.528 "base_bdevs_list": [ 00:09:18.528 { 00:09:18.528 "name": "BaseBdev1", 00:09:18.528 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:18.528 "is_configured": false, 00:09:18.528 "data_offset": 0, 00:09:18.528 "data_size": 0 00:09:18.528 }, 00:09:18.528 { 00:09:18.528 "name": "BaseBdev2", 00:09:18.528 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:18.528 "is_configured": false, 00:09:18.528 "data_offset": 0, 00:09:18.528 "data_size": 0 00:09:18.528 }, 00:09:18.528 { 00:09:18.528 "name": "BaseBdev3", 00:09:18.528 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:18.528 "is_configured": false, 00:09:18.528 "data_offset": 0, 00:09:18.528 "data_size": 0 00:09:18.528 } 00:09:18.528 ] 00:09:18.528 }' 00:09:18.528 16:05:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:18.528 16:05:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.098 16:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:19.098 16:05:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.098 16:05:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.098 [2024-12-12 16:05:45.233651] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:19.098 [2024-12-12 16:05:45.233705] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:19.098 16:05:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.098 16:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:19.098 16:05:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.098 16:05:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.098 [2024-12-12 16:05:45.245616] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:19.098 [2024-12-12 16:05:45.245668] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:19.098 [2024-12-12 16:05:45.245677] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:19.098 [2024-12-12 16:05:45.245687] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:09:19.098 [2024-12-12 16:05:45.245693] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:19.098 [2024-12-12 16:05:45.245702] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:19.098 16:05:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.098 16:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:19.098 16:05:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.098 16:05:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.098 [2024-12-12 16:05:45.301401] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:19.098 BaseBdev1 00:09:19.098 16:05:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.098 16:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:19.098 16:05:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:19.098 16:05:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:19.098 16:05:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:19.098 16:05:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:19.098 16:05:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:19.098 16:05:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:19.098 16:05:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.098 16:05:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:09:19.098 16:05:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.098 16:05:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:19.098 16:05:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.098 16:05:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.098 [ 00:09:19.098 { 00:09:19.098 "name": "BaseBdev1", 00:09:19.098 "aliases": [ 00:09:19.098 "fc30f3f9-a14b-4d50-99e7-eec92e08c2b7" 00:09:19.098 ], 00:09:19.098 "product_name": "Malloc disk", 00:09:19.098 "block_size": 512, 00:09:19.098 "num_blocks": 65536, 00:09:19.098 "uuid": "fc30f3f9-a14b-4d50-99e7-eec92e08c2b7", 00:09:19.098 "assigned_rate_limits": { 00:09:19.098 "rw_ios_per_sec": 0, 00:09:19.098 "rw_mbytes_per_sec": 0, 00:09:19.098 "r_mbytes_per_sec": 0, 00:09:19.098 "w_mbytes_per_sec": 0 00:09:19.098 }, 00:09:19.098 "claimed": true, 00:09:19.098 "claim_type": "exclusive_write", 00:09:19.098 "zoned": false, 00:09:19.098 "supported_io_types": { 00:09:19.098 "read": true, 00:09:19.098 "write": true, 00:09:19.098 "unmap": true, 00:09:19.098 "flush": true, 00:09:19.098 "reset": true, 00:09:19.098 "nvme_admin": false, 00:09:19.098 "nvme_io": false, 00:09:19.098 "nvme_io_md": false, 00:09:19.098 "write_zeroes": true, 00:09:19.098 "zcopy": true, 00:09:19.098 "get_zone_info": false, 00:09:19.098 "zone_management": false, 00:09:19.098 "zone_append": false, 00:09:19.098 "compare": false, 00:09:19.098 "compare_and_write": false, 00:09:19.098 "abort": true, 00:09:19.098 "seek_hole": false, 00:09:19.098 "seek_data": false, 00:09:19.098 "copy": true, 00:09:19.098 "nvme_iov_md": false 00:09:19.098 }, 00:09:19.098 "memory_domains": [ 00:09:19.098 { 00:09:19.098 "dma_device_id": "system", 00:09:19.098 "dma_device_type": 1 00:09:19.098 }, 00:09:19.098 { 00:09:19.098 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:09:19.098 "dma_device_type": 2 00:09:19.098 } 00:09:19.098 ], 00:09:19.098 "driver_specific": {} 00:09:19.098 } 00:09:19.098 ] 00:09:19.098 16:05:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.098 16:05:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:19.098 16:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:19.098 16:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:19.098 16:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:19.098 16:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:19.098 16:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:19.098 16:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:19.098 16:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:19.098 16:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:19.098 16:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:19.098 16:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:19.098 16:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.098 16:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:19.098 16:05:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.098 16:05:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.098 16:05:45 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.098 16:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:19.098 "name": "Existed_Raid", 00:09:19.098 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:19.098 "strip_size_kb": 64, 00:09:19.098 "state": "configuring", 00:09:19.098 "raid_level": "concat", 00:09:19.098 "superblock": false, 00:09:19.098 "num_base_bdevs": 3, 00:09:19.098 "num_base_bdevs_discovered": 1, 00:09:19.098 "num_base_bdevs_operational": 3, 00:09:19.098 "base_bdevs_list": [ 00:09:19.098 { 00:09:19.098 "name": "BaseBdev1", 00:09:19.098 "uuid": "fc30f3f9-a14b-4d50-99e7-eec92e08c2b7", 00:09:19.098 "is_configured": true, 00:09:19.098 "data_offset": 0, 00:09:19.098 "data_size": 65536 00:09:19.098 }, 00:09:19.098 { 00:09:19.098 "name": "BaseBdev2", 00:09:19.098 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:19.098 "is_configured": false, 00:09:19.098 "data_offset": 0, 00:09:19.098 "data_size": 0 00:09:19.098 }, 00:09:19.098 { 00:09:19.098 "name": "BaseBdev3", 00:09:19.098 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:19.098 "is_configured": false, 00:09:19.098 "data_offset": 0, 00:09:19.098 "data_size": 0 00:09:19.098 } 00:09:19.098 ] 00:09:19.098 }' 00:09:19.098 16:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:19.098 16:05:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.667 16:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:19.667 16:05:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.667 16:05:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.667 [2024-12-12 16:05:45.728766] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:19.667 [2024-12-12 16:05:45.728850] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:19.667 16:05:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.668 16:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:19.668 16:05:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.668 16:05:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.668 [2024-12-12 16:05:45.740774] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:19.668 [2024-12-12 16:05:45.742840] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:19.668 [2024-12-12 16:05:45.742887] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:19.668 [2024-12-12 16:05:45.742926] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:19.668 [2024-12-12 16:05:45.742938] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:19.668 16:05:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.668 16:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:19.668 16:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:19.668 16:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:19.668 16:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:19.668 16:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:19.668 16:05:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:19.668 16:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:19.668 16:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:19.668 16:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:19.668 16:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:19.668 16:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:19.668 16:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:19.668 16:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.668 16:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:19.668 16:05:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.668 16:05:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.668 16:05:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.668 16:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:19.668 "name": "Existed_Raid", 00:09:19.668 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:19.668 "strip_size_kb": 64, 00:09:19.668 "state": "configuring", 00:09:19.668 "raid_level": "concat", 00:09:19.668 "superblock": false, 00:09:19.668 "num_base_bdevs": 3, 00:09:19.668 "num_base_bdevs_discovered": 1, 00:09:19.668 "num_base_bdevs_operational": 3, 00:09:19.668 "base_bdevs_list": [ 00:09:19.668 { 00:09:19.668 "name": "BaseBdev1", 00:09:19.668 "uuid": "fc30f3f9-a14b-4d50-99e7-eec92e08c2b7", 00:09:19.668 "is_configured": true, 00:09:19.668 "data_offset": 
0, 00:09:19.668 "data_size": 65536 00:09:19.668 }, 00:09:19.668 { 00:09:19.668 "name": "BaseBdev2", 00:09:19.668 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:19.668 "is_configured": false, 00:09:19.668 "data_offset": 0, 00:09:19.668 "data_size": 0 00:09:19.668 }, 00:09:19.668 { 00:09:19.668 "name": "BaseBdev3", 00:09:19.668 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:19.668 "is_configured": false, 00:09:19.668 "data_offset": 0, 00:09:19.668 "data_size": 0 00:09:19.668 } 00:09:19.668 ] 00:09:19.668 }' 00:09:19.668 16:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:19.668 16:05:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.927 16:05:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:19.927 16:05:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.927 16:05:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.927 [2024-12-12 16:05:46.210329] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:19.927 BaseBdev2 00:09:19.927 16:05:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.927 16:05:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:19.927 16:05:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:19.927 16:05:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:19.927 16:05:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:19.927 16:05:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:19.927 16:05:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:09:19.927 16:05:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:19.927 16:05:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.927 16:05:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.927 16:05:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.927 16:05:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:19.927 16:05:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.927 16:05:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.927 [ 00:09:19.927 { 00:09:19.927 "name": "BaseBdev2", 00:09:19.927 "aliases": [ 00:09:19.927 "17798f55-5090-4bd6-a388-58e748218be6" 00:09:19.927 ], 00:09:19.927 "product_name": "Malloc disk", 00:09:19.927 "block_size": 512, 00:09:19.927 "num_blocks": 65536, 00:09:19.927 "uuid": "17798f55-5090-4bd6-a388-58e748218be6", 00:09:19.927 "assigned_rate_limits": { 00:09:19.927 "rw_ios_per_sec": 0, 00:09:19.927 "rw_mbytes_per_sec": 0, 00:09:19.927 "r_mbytes_per_sec": 0, 00:09:19.927 "w_mbytes_per_sec": 0 00:09:19.927 }, 00:09:19.927 "claimed": true, 00:09:19.927 "claim_type": "exclusive_write", 00:09:19.927 "zoned": false, 00:09:19.927 "supported_io_types": { 00:09:19.927 "read": true, 00:09:19.927 "write": true, 00:09:19.927 "unmap": true, 00:09:19.927 "flush": true, 00:09:19.927 "reset": true, 00:09:19.927 "nvme_admin": false, 00:09:19.927 "nvme_io": false, 00:09:19.927 "nvme_io_md": false, 00:09:19.927 "write_zeroes": true, 00:09:19.927 "zcopy": true, 00:09:19.927 "get_zone_info": false, 00:09:19.927 "zone_management": false, 00:09:19.927 "zone_append": false, 00:09:19.927 "compare": false, 00:09:19.927 "compare_and_write": false, 00:09:19.927 "abort": true, 00:09:19.927 "seek_hole": 
false, 00:09:19.927 "seek_data": false, 00:09:19.927 "copy": true, 00:09:19.927 "nvme_iov_md": false 00:09:19.927 }, 00:09:19.927 "memory_domains": [ 00:09:19.927 { 00:09:19.927 "dma_device_id": "system", 00:09:19.927 "dma_device_type": 1 00:09:19.927 }, 00:09:19.927 { 00:09:19.927 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:19.927 "dma_device_type": 2 00:09:19.927 } 00:09:19.927 ], 00:09:19.927 "driver_specific": {} 00:09:19.927 } 00:09:19.927 ] 00:09:19.927 16:05:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.927 16:05:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:19.927 16:05:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:19.927 16:05:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:19.927 16:05:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:19.927 16:05:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:19.927 16:05:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:19.927 16:05:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:19.927 16:05:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:19.927 16:05:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:19.927 16:05:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:19.927 16:05:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:19.927 16:05:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:19.927 16:05:46 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:09:19.927 16:05:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:19.927 16:05:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.927 16:05:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.927 16:05:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.927 16:05:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.187 16:05:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:20.187 "name": "Existed_Raid", 00:09:20.187 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:20.187 "strip_size_kb": 64, 00:09:20.187 "state": "configuring", 00:09:20.187 "raid_level": "concat", 00:09:20.187 "superblock": false, 00:09:20.187 "num_base_bdevs": 3, 00:09:20.187 "num_base_bdevs_discovered": 2, 00:09:20.187 "num_base_bdevs_operational": 3, 00:09:20.187 "base_bdevs_list": [ 00:09:20.187 { 00:09:20.187 "name": "BaseBdev1", 00:09:20.187 "uuid": "fc30f3f9-a14b-4d50-99e7-eec92e08c2b7", 00:09:20.187 "is_configured": true, 00:09:20.187 "data_offset": 0, 00:09:20.187 "data_size": 65536 00:09:20.187 }, 00:09:20.187 { 00:09:20.187 "name": "BaseBdev2", 00:09:20.187 "uuid": "17798f55-5090-4bd6-a388-58e748218be6", 00:09:20.187 "is_configured": true, 00:09:20.187 "data_offset": 0, 00:09:20.187 "data_size": 65536 00:09:20.187 }, 00:09:20.187 { 00:09:20.187 "name": "BaseBdev3", 00:09:20.187 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:20.187 "is_configured": false, 00:09:20.187 "data_offset": 0, 00:09:20.187 "data_size": 0 00:09:20.187 } 00:09:20.187 ] 00:09:20.187 }' 00:09:20.187 16:05:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:20.187 16:05:46 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:20.446 16:05:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:20.446 16:05:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.446 16:05:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.446 [2024-12-12 16:05:46.730291] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:20.446 [2024-12-12 16:05:46.730349] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:20.446 [2024-12-12 16:05:46.730364] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:20.446 [2024-12-12 16:05:46.730656] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:20.446 [2024-12-12 16:05:46.730871] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:20.446 [2024-12-12 16:05:46.730889] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:20.446 [2024-12-12 16:05:46.731202] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:20.446 BaseBdev3 00:09:20.446 16:05:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.446 16:05:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:20.446 16:05:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:20.446 16:05:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:20.446 16:05:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:20.446 16:05:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:20.446 16:05:46 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:20.446 16:05:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:20.446 16:05:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.446 16:05:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.446 16:05:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.446 16:05:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:20.446 16:05:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.446 16:05:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.447 [ 00:09:20.447 { 00:09:20.447 "name": "BaseBdev3", 00:09:20.447 "aliases": [ 00:09:20.447 "e059a452-fcc7-48f3-be36-30d63659d971" 00:09:20.447 ], 00:09:20.447 "product_name": "Malloc disk", 00:09:20.447 "block_size": 512, 00:09:20.447 "num_blocks": 65536, 00:09:20.447 "uuid": "e059a452-fcc7-48f3-be36-30d63659d971", 00:09:20.447 "assigned_rate_limits": { 00:09:20.447 "rw_ios_per_sec": 0, 00:09:20.447 "rw_mbytes_per_sec": 0, 00:09:20.447 "r_mbytes_per_sec": 0, 00:09:20.447 "w_mbytes_per_sec": 0 00:09:20.447 }, 00:09:20.447 "claimed": true, 00:09:20.447 "claim_type": "exclusive_write", 00:09:20.447 "zoned": false, 00:09:20.447 "supported_io_types": { 00:09:20.447 "read": true, 00:09:20.447 "write": true, 00:09:20.447 "unmap": true, 00:09:20.447 "flush": true, 00:09:20.447 "reset": true, 00:09:20.447 "nvme_admin": false, 00:09:20.447 "nvme_io": false, 00:09:20.447 "nvme_io_md": false, 00:09:20.447 "write_zeroes": true, 00:09:20.447 "zcopy": true, 00:09:20.447 "get_zone_info": false, 00:09:20.447 "zone_management": false, 00:09:20.447 "zone_append": false, 00:09:20.447 "compare": false, 
00:09:20.447 "compare_and_write": false, 00:09:20.447 "abort": true, 00:09:20.447 "seek_hole": false, 00:09:20.447 "seek_data": false, 00:09:20.447 "copy": true, 00:09:20.447 "nvme_iov_md": false 00:09:20.447 }, 00:09:20.447 "memory_domains": [ 00:09:20.447 { 00:09:20.447 "dma_device_id": "system", 00:09:20.447 "dma_device_type": 1 00:09:20.447 }, 00:09:20.447 { 00:09:20.447 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:20.447 "dma_device_type": 2 00:09:20.447 } 00:09:20.447 ], 00:09:20.447 "driver_specific": {} 00:09:20.447 } 00:09:20.447 ] 00:09:20.447 16:05:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.447 16:05:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:20.447 16:05:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:20.447 16:05:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:20.447 16:05:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:20.447 16:05:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:20.447 16:05:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:20.447 16:05:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:20.447 16:05:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:20.447 16:05:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:20.447 16:05:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:20.447 16:05:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:20.447 16:05:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:09:20.447 16:05:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:20.447 16:05:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.447 16:05:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:20.447 16:05:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.447 16:05:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.447 16:05:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.706 16:05:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:20.706 "name": "Existed_Raid", 00:09:20.706 "uuid": "1c26c4d5-39df-4562-9b77-296b8ea2ff4a", 00:09:20.706 "strip_size_kb": 64, 00:09:20.706 "state": "online", 00:09:20.706 "raid_level": "concat", 00:09:20.706 "superblock": false, 00:09:20.706 "num_base_bdevs": 3, 00:09:20.706 "num_base_bdevs_discovered": 3, 00:09:20.706 "num_base_bdevs_operational": 3, 00:09:20.706 "base_bdevs_list": [ 00:09:20.706 { 00:09:20.706 "name": "BaseBdev1", 00:09:20.706 "uuid": "fc30f3f9-a14b-4d50-99e7-eec92e08c2b7", 00:09:20.706 "is_configured": true, 00:09:20.706 "data_offset": 0, 00:09:20.706 "data_size": 65536 00:09:20.706 }, 00:09:20.706 { 00:09:20.706 "name": "BaseBdev2", 00:09:20.706 "uuid": "17798f55-5090-4bd6-a388-58e748218be6", 00:09:20.706 "is_configured": true, 00:09:20.706 "data_offset": 0, 00:09:20.706 "data_size": 65536 00:09:20.706 }, 00:09:20.706 { 00:09:20.706 "name": "BaseBdev3", 00:09:20.706 "uuid": "e059a452-fcc7-48f3-be36-30d63659d971", 00:09:20.706 "is_configured": true, 00:09:20.706 "data_offset": 0, 00:09:20.706 "data_size": 65536 00:09:20.706 } 00:09:20.706 ] 00:09:20.706 }' 00:09:20.706 16:05:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:09:20.706 16:05:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.966 16:05:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:20.966 16:05:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:20.966 16:05:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:20.966 16:05:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:20.966 16:05:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:20.966 16:05:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:20.966 16:05:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:20.966 16:05:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.966 16:05:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.966 16:05:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:20.966 [2024-12-12 16:05:47.217926] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:20.966 16:05:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.966 16:05:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:20.966 "name": "Existed_Raid", 00:09:20.966 "aliases": [ 00:09:20.966 "1c26c4d5-39df-4562-9b77-296b8ea2ff4a" 00:09:20.966 ], 00:09:20.966 "product_name": "Raid Volume", 00:09:20.966 "block_size": 512, 00:09:20.966 "num_blocks": 196608, 00:09:20.966 "uuid": "1c26c4d5-39df-4562-9b77-296b8ea2ff4a", 00:09:20.966 "assigned_rate_limits": { 00:09:20.966 "rw_ios_per_sec": 0, 00:09:20.966 "rw_mbytes_per_sec": 0, 00:09:20.966 "r_mbytes_per_sec": 
0, 00:09:20.966 "w_mbytes_per_sec": 0 00:09:20.966 }, 00:09:20.966 "claimed": false, 00:09:20.966 "zoned": false, 00:09:20.966 "supported_io_types": { 00:09:20.966 "read": true, 00:09:20.966 "write": true, 00:09:20.966 "unmap": true, 00:09:20.966 "flush": true, 00:09:20.966 "reset": true, 00:09:20.966 "nvme_admin": false, 00:09:20.966 "nvme_io": false, 00:09:20.966 "nvme_io_md": false, 00:09:20.966 "write_zeroes": true, 00:09:20.966 "zcopy": false, 00:09:20.966 "get_zone_info": false, 00:09:20.966 "zone_management": false, 00:09:20.966 "zone_append": false, 00:09:20.966 "compare": false, 00:09:20.966 "compare_and_write": false, 00:09:20.966 "abort": false, 00:09:20.966 "seek_hole": false, 00:09:20.966 "seek_data": false, 00:09:20.966 "copy": false, 00:09:20.966 "nvme_iov_md": false 00:09:20.966 }, 00:09:20.966 "memory_domains": [ 00:09:20.966 { 00:09:20.966 "dma_device_id": "system", 00:09:20.966 "dma_device_type": 1 00:09:20.966 }, 00:09:20.966 { 00:09:20.966 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:20.966 "dma_device_type": 2 00:09:20.966 }, 00:09:20.966 { 00:09:20.966 "dma_device_id": "system", 00:09:20.966 "dma_device_type": 1 00:09:20.966 }, 00:09:20.966 { 00:09:20.966 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:20.966 "dma_device_type": 2 00:09:20.966 }, 00:09:20.966 { 00:09:20.966 "dma_device_id": "system", 00:09:20.966 "dma_device_type": 1 00:09:20.966 }, 00:09:20.966 { 00:09:20.966 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:20.966 "dma_device_type": 2 00:09:20.966 } 00:09:20.966 ], 00:09:20.966 "driver_specific": { 00:09:20.966 "raid": { 00:09:20.966 "uuid": "1c26c4d5-39df-4562-9b77-296b8ea2ff4a", 00:09:20.966 "strip_size_kb": 64, 00:09:20.966 "state": "online", 00:09:20.966 "raid_level": "concat", 00:09:20.966 "superblock": false, 00:09:20.966 "num_base_bdevs": 3, 00:09:20.966 "num_base_bdevs_discovered": 3, 00:09:20.966 "num_base_bdevs_operational": 3, 00:09:20.966 "base_bdevs_list": [ 00:09:20.966 { 00:09:20.966 "name": "BaseBdev1", 
00:09:20.966 "uuid": "fc30f3f9-a14b-4d50-99e7-eec92e08c2b7", 00:09:20.966 "is_configured": true, 00:09:20.966 "data_offset": 0, 00:09:20.966 "data_size": 65536 00:09:20.966 }, 00:09:20.966 { 00:09:20.966 "name": "BaseBdev2", 00:09:20.966 "uuid": "17798f55-5090-4bd6-a388-58e748218be6", 00:09:20.966 "is_configured": true, 00:09:20.966 "data_offset": 0, 00:09:20.966 "data_size": 65536 00:09:20.966 }, 00:09:20.966 { 00:09:20.966 "name": "BaseBdev3", 00:09:20.966 "uuid": "e059a452-fcc7-48f3-be36-30d63659d971", 00:09:20.966 "is_configured": true, 00:09:20.966 "data_offset": 0, 00:09:20.966 "data_size": 65536 00:09:20.966 } 00:09:20.966 ] 00:09:20.966 } 00:09:20.966 } 00:09:20.966 }' 00:09:20.966 16:05:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:20.966 16:05:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:20.966 BaseBdev2 00:09:20.966 BaseBdev3' 00:09:20.966 16:05:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:21.226 16:05:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:21.226 16:05:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:21.226 16:05:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:21.226 16:05:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:21.226 16:05:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.226 16:05:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.226 16:05:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:09:21.226 16:05:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:21.226 16:05:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:21.226 16:05:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:21.226 16:05:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:21.226 16:05:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:21.226 16:05:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.226 16:05:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.226 16:05:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.226 16:05:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:21.226 16:05:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:21.226 16:05:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:21.226 16:05:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:21.226 16:05:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.226 16:05:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:21.226 16:05:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.226 16:05:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.226 16:05:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:09:21.226 16:05:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:21.226 16:05:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:21.226 16:05:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.226 16:05:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.226 [2024-12-12 16:05:47.461179] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:21.226 [2024-12-12 16:05:47.461224] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:21.226 [2024-12-12 16:05:47.461284] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:21.226 16:05:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.226 16:05:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:21.226 16:05:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:21.226 16:05:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:21.226 16:05:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:21.226 16:05:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:21.226 16:05:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:09:21.226 16:05:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:21.226 16:05:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:21.226 16:05:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:21.226 16:05:47 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:21.226 16:05:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:21.226 16:05:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:21.226 16:05:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:21.226 16:05:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:21.227 16:05:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:21.486 16:05:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.486 16:05:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.486 16:05:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:21.486 16:05:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.486 16:05:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.486 16:05:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:21.486 "name": "Existed_Raid", 00:09:21.486 "uuid": "1c26c4d5-39df-4562-9b77-296b8ea2ff4a", 00:09:21.486 "strip_size_kb": 64, 00:09:21.486 "state": "offline", 00:09:21.486 "raid_level": "concat", 00:09:21.486 "superblock": false, 00:09:21.486 "num_base_bdevs": 3, 00:09:21.486 "num_base_bdevs_discovered": 2, 00:09:21.486 "num_base_bdevs_operational": 2, 00:09:21.486 "base_bdevs_list": [ 00:09:21.486 { 00:09:21.486 "name": null, 00:09:21.486 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:21.486 "is_configured": false, 00:09:21.486 "data_offset": 0, 00:09:21.486 "data_size": 65536 00:09:21.486 }, 00:09:21.486 { 00:09:21.486 "name": "BaseBdev2", 00:09:21.486 "uuid": 
"17798f55-5090-4bd6-a388-58e748218be6", 00:09:21.486 "is_configured": true, 00:09:21.486 "data_offset": 0, 00:09:21.486 "data_size": 65536 00:09:21.486 }, 00:09:21.486 { 00:09:21.486 "name": "BaseBdev3", 00:09:21.486 "uuid": "e059a452-fcc7-48f3-be36-30d63659d971", 00:09:21.486 "is_configured": true, 00:09:21.486 "data_offset": 0, 00:09:21.486 "data_size": 65536 00:09:21.486 } 00:09:21.486 ] 00:09:21.486 }' 00:09:21.486 16:05:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:21.486 16:05:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.746 16:05:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:21.746 16:05:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:21.746 16:05:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.746 16:05:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.746 16:05:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.746 16:05:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:21.746 16:05:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.746 16:05:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:21.746 16:05:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:21.746 16:05:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:21.746 16:05:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.746 16:05:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.746 [2024-12-12 16:05:48.030405] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:22.005 16:05:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.005 16:05:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:22.005 16:05:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:22.005 16:05:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.005 16:05:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:22.005 16:05:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.005 16:05:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.006 16:05:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.006 16:05:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:22.006 16:05:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:22.006 16:05:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:22.006 16:05:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.006 16:05:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.006 [2024-12-12 16:05:48.197492] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:22.006 [2024-12-12 16:05:48.197566] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:22.006 16:05:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.006 16:05:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:22.006 16:05:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:22.006 16:05:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.006 16:05:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:22.006 16:05:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.006 16:05:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.006 16:05:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.266 16:05:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:22.266 16:05:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:22.266 16:05:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:22.266 16:05:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:22.266 16:05:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:22.266 16:05:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:22.266 16:05:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.266 16:05:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.266 BaseBdev2 00:09:22.266 16:05:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.266 16:05:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:22.266 16:05:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:22.266 16:05:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:22.266 
16:05:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:22.266 16:05:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:22.266 16:05:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:22.266 16:05:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:22.266 16:05:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.266 16:05:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.266 16:05:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.266 16:05:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:22.266 16:05:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.266 16:05:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.266 [ 00:09:22.266 { 00:09:22.266 "name": "BaseBdev2", 00:09:22.266 "aliases": [ 00:09:22.266 "13d6aac6-118d-4cfe-9e36-fc7e994b47d0" 00:09:22.266 ], 00:09:22.266 "product_name": "Malloc disk", 00:09:22.266 "block_size": 512, 00:09:22.266 "num_blocks": 65536, 00:09:22.266 "uuid": "13d6aac6-118d-4cfe-9e36-fc7e994b47d0", 00:09:22.266 "assigned_rate_limits": { 00:09:22.266 "rw_ios_per_sec": 0, 00:09:22.266 "rw_mbytes_per_sec": 0, 00:09:22.266 "r_mbytes_per_sec": 0, 00:09:22.266 "w_mbytes_per_sec": 0 00:09:22.266 }, 00:09:22.266 "claimed": false, 00:09:22.266 "zoned": false, 00:09:22.266 "supported_io_types": { 00:09:22.266 "read": true, 00:09:22.266 "write": true, 00:09:22.266 "unmap": true, 00:09:22.266 "flush": true, 00:09:22.266 "reset": true, 00:09:22.266 "nvme_admin": false, 00:09:22.266 "nvme_io": false, 00:09:22.266 "nvme_io_md": false, 00:09:22.266 "write_zeroes": true, 
00:09:22.266 "zcopy": true, 00:09:22.266 "get_zone_info": false, 00:09:22.266 "zone_management": false, 00:09:22.266 "zone_append": false, 00:09:22.266 "compare": false, 00:09:22.266 "compare_and_write": false, 00:09:22.266 "abort": true, 00:09:22.266 "seek_hole": false, 00:09:22.266 "seek_data": false, 00:09:22.266 "copy": true, 00:09:22.266 "nvme_iov_md": false 00:09:22.266 }, 00:09:22.266 "memory_domains": [ 00:09:22.266 { 00:09:22.266 "dma_device_id": "system", 00:09:22.266 "dma_device_type": 1 00:09:22.266 }, 00:09:22.266 { 00:09:22.266 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:22.266 "dma_device_type": 2 00:09:22.266 } 00:09:22.266 ], 00:09:22.266 "driver_specific": {} 00:09:22.266 } 00:09:22.266 ] 00:09:22.266 16:05:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.266 16:05:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:22.266 16:05:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:22.266 16:05:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:22.266 16:05:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:22.266 16:05:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.266 16:05:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.266 BaseBdev3 00:09:22.266 16:05:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.266 16:05:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:22.266 16:05:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:22.266 16:05:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:22.266 16:05:48 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:22.266 16:05:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:22.266 16:05:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:22.266 16:05:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:22.266 16:05:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.266 16:05:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.266 16:05:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.266 16:05:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:22.267 16:05:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.267 16:05:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.267 [ 00:09:22.267 { 00:09:22.267 "name": "BaseBdev3", 00:09:22.267 "aliases": [ 00:09:22.267 "0a738b48-1d0e-4815-a86f-80d625703d5c" 00:09:22.267 ], 00:09:22.267 "product_name": "Malloc disk", 00:09:22.267 "block_size": 512, 00:09:22.267 "num_blocks": 65536, 00:09:22.267 "uuid": "0a738b48-1d0e-4815-a86f-80d625703d5c", 00:09:22.267 "assigned_rate_limits": { 00:09:22.267 "rw_ios_per_sec": 0, 00:09:22.267 "rw_mbytes_per_sec": 0, 00:09:22.267 "r_mbytes_per_sec": 0, 00:09:22.267 "w_mbytes_per_sec": 0 00:09:22.267 }, 00:09:22.267 "claimed": false, 00:09:22.267 "zoned": false, 00:09:22.267 "supported_io_types": { 00:09:22.267 "read": true, 00:09:22.267 "write": true, 00:09:22.267 "unmap": true, 00:09:22.267 "flush": true, 00:09:22.267 "reset": true, 00:09:22.267 "nvme_admin": false, 00:09:22.267 "nvme_io": false, 00:09:22.267 "nvme_io_md": false, 00:09:22.267 "write_zeroes": true, 
00:09:22.267 "zcopy": true, 00:09:22.267 "get_zone_info": false, 00:09:22.267 "zone_management": false, 00:09:22.267 "zone_append": false, 00:09:22.267 "compare": false, 00:09:22.267 "compare_and_write": false, 00:09:22.267 "abort": true, 00:09:22.267 "seek_hole": false, 00:09:22.267 "seek_data": false, 00:09:22.267 "copy": true, 00:09:22.267 "nvme_iov_md": false 00:09:22.267 }, 00:09:22.267 "memory_domains": [ 00:09:22.267 { 00:09:22.267 "dma_device_id": "system", 00:09:22.267 "dma_device_type": 1 00:09:22.267 }, 00:09:22.267 { 00:09:22.267 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:22.267 "dma_device_type": 2 00:09:22.267 } 00:09:22.267 ], 00:09:22.267 "driver_specific": {} 00:09:22.267 } 00:09:22.267 ] 00:09:22.267 16:05:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.267 16:05:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:22.267 16:05:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:22.267 16:05:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:22.267 16:05:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:22.267 16:05:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.267 16:05:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.267 [2024-12-12 16:05:48.544948] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:22.267 [2024-12-12 16:05:48.545092] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:22.267 [2024-12-12 16:05:48.545138] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:22.267 [2024-12-12 16:05:48.547218] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:22.267 16:05:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.267 16:05:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:22.267 16:05:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:22.267 16:05:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:22.267 16:05:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:22.267 16:05:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:22.267 16:05:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:22.267 16:05:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:22.267 16:05:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:22.267 16:05:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:22.267 16:05:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:22.267 16:05:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.267 16:05:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.267 16:05:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:22.267 16:05:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.267 16:05:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.267 16:05:48 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:22.267 "name": "Existed_Raid", 00:09:22.267 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:22.267 "strip_size_kb": 64, 00:09:22.267 "state": "configuring", 00:09:22.267 "raid_level": "concat", 00:09:22.267 "superblock": false, 00:09:22.267 "num_base_bdevs": 3, 00:09:22.267 "num_base_bdevs_discovered": 2, 00:09:22.267 "num_base_bdevs_operational": 3, 00:09:22.267 "base_bdevs_list": [ 00:09:22.267 { 00:09:22.267 "name": "BaseBdev1", 00:09:22.267 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:22.267 "is_configured": false, 00:09:22.267 "data_offset": 0, 00:09:22.267 "data_size": 0 00:09:22.267 }, 00:09:22.267 { 00:09:22.267 "name": "BaseBdev2", 00:09:22.267 "uuid": "13d6aac6-118d-4cfe-9e36-fc7e994b47d0", 00:09:22.267 "is_configured": true, 00:09:22.267 "data_offset": 0, 00:09:22.267 "data_size": 65536 00:09:22.267 }, 00:09:22.267 { 00:09:22.267 "name": "BaseBdev3", 00:09:22.267 "uuid": "0a738b48-1d0e-4815-a86f-80d625703d5c", 00:09:22.267 "is_configured": true, 00:09:22.267 "data_offset": 0, 00:09:22.267 "data_size": 65536 00:09:22.267 } 00:09:22.267 ] 00:09:22.267 }' 00:09:22.267 16:05:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:22.267 16:05:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.836 16:05:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:22.836 16:05:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.836 16:05:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.836 [2024-12-12 16:05:48.912368] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:22.836 16:05:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.836 16:05:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:22.836 16:05:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:22.836 16:05:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:22.836 16:05:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:22.836 16:05:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:22.836 16:05:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:22.836 16:05:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:22.836 16:05:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:22.836 16:05:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:22.836 16:05:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:22.836 16:05:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.836 16:05:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:22.836 16:05:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.836 16:05:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.836 16:05:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.836 16:05:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:22.836 "name": "Existed_Raid", 00:09:22.836 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:22.836 "strip_size_kb": 64, 00:09:22.836 "state": "configuring", 00:09:22.836 "raid_level": "concat", 00:09:22.836 "superblock": false, 
00:09:22.836 "num_base_bdevs": 3, 00:09:22.836 "num_base_bdevs_discovered": 1, 00:09:22.836 "num_base_bdevs_operational": 3, 00:09:22.836 "base_bdevs_list": [ 00:09:22.836 { 00:09:22.836 "name": "BaseBdev1", 00:09:22.836 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:22.836 "is_configured": false, 00:09:22.836 "data_offset": 0, 00:09:22.836 "data_size": 0 00:09:22.836 }, 00:09:22.836 { 00:09:22.836 "name": null, 00:09:22.836 "uuid": "13d6aac6-118d-4cfe-9e36-fc7e994b47d0", 00:09:22.836 "is_configured": false, 00:09:22.836 "data_offset": 0, 00:09:22.836 "data_size": 65536 00:09:22.836 }, 00:09:22.836 { 00:09:22.836 "name": "BaseBdev3", 00:09:22.836 "uuid": "0a738b48-1d0e-4815-a86f-80d625703d5c", 00:09:22.836 "is_configured": true, 00:09:22.836 "data_offset": 0, 00:09:22.836 "data_size": 65536 00:09:22.836 } 00:09:22.836 ] 00:09:22.836 }' 00:09:22.836 16:05:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:22.836 16:05:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.097 16:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:23.097 16:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.097 16:05:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.097 16:05:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.097 16:05:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.097 16:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:23.097 16:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:23.097 16:05:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.097 
16:05:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.097 [2024-12-12 16:05:49.442400] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:23.097 BaseBdev1 00:09:23.097 16:05:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.097 16:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:23.097 16:05:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:23.097 16:05:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:23.097 16:05:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:23.097 16:05:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:23.097 16:05:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:23.097 16:05:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:23.097 16:05:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.357 16:05:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.357 16:05:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.357 16:05:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:23.357 16:05:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.357 16:05:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.357 [ 00:09:23.357 { 00:09:23.357 "name": "BaseBdev1", 00:09:23.357 "aliases": [ 00:09:23.357 "232f7409-db63-46bd-aef1-99a512cdf3a5" 00:09:23.357 ], 00:09:23.357 "product_name": 
"Malloc disk", 00:09:23.357 "block_size": 512, 00:09:23.357 "num_blocks": 65536, 00:09:23.357 "uuid": "232f7409-db63-46bd-aef1-99a512cdf3a5", 00:09:23.357 "assigned_rate_limits": { 00:09:23.357 "rw_ios_per_sec": 0, 00:09:23.357 "rw_mbytes_per_sec": 0, 00:09:23.357 "r_mbytes_per_sec": 0, 00:09:23.357 "w_mbytes_per_sec": 0 00:09:23.357 }, 00:09:23.357 "claimed": true, 00:09:23.357 "claim_type": "exclusive_write", 00:09:23.357 "zoned": false, 00:09:23.357 "supported_io_types": { 00:09:23.357 "read": true, 00:09:23.357 "write": true, 00:09:23.357 "unmap": true, 00:09:23.357 "flush": true, 00:09:23.357 "reset": true, 00:09:23.357 "nvme_admin": false, 00:09:23.357 "nvme_io": false, 00:09:23.357 "nvme_io_md": false, 00:09:23.357 "write_zeroes": true, 00:09:23.357 "zcopy": true, 00:09:23.357 "get_zone_info": false, 00:09:23.357 "zone_management": false, 00:09:23.357 "zone_append": false, 00:09:23.357 "compare": false, 00:09:23.357 "compare_and_write": false, 00:09:23.357 "abort": true, 00:09:23.357 "seek_hole": false, 00:09:23.357 "seek_data": false, 00:09:23.357 "copy": true, 00:09:23.357 "nvme_iov_md": false 00:09:23.357 }, 00:09:23.357 "memory_domains": [ 00:09:23.357 { 00:09:23.357 "dma_device_id": "system", 00:09:23.357 "dma_device_type": 1 00:09:23.357 }, 00:09:23.357 { 00:09:23.357 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:23.357 "dma_device_type": 2 00:09:23.357 } 00:09:23.357 ], 00:09:23.357 "driver_specific": {} 00:09:23.357 } 00:09:23.357 ] 00:09:23.357 16:05:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.357 16:05:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:23.357 16:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:23.357 16:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:23.357 16:05:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:23.357 16:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:23.357 16:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:23.357 16:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:23.357 16:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:23.357 16:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:23.357 16:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:23.357 16:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:23.357 16:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:23.357 16:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.357 16:05:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.357 16:05:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.357 16:05:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.357 16:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:23.357 "name": "Existed_Raid", 00:09:23.357 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:23.357 "strip_size_kb": 64, 00:09:23.357 "state": "configuring", 00:09:23.357 "raid_level": "concat", 00:09:23.357 "superblock": false, 00:09:23.357 "num_base_bdevs": 3, 00:09:23.357 "num_base_bdevs_discovered": 2, 00:09:23.357 "num_base_bdevs_operational": 3, 00:09:23.357 "base_bdevs_list": [ 00:09:23.357 { 00:09:23.357 "name": "BaseBdev1", 
00:09:23.357 "uuid": "232f7409-db63-46bd-aef1-99a512cdf3a5", 00:09:23.357 "is_configured": true, 00:09:23.357 "data_offset": 0, 00:09:23.357 "data_size": 65536 00:09:23.357 }, 00:09:23.357 { 00:09:23.357 "name": null, 00:09:23.357 "uuid": "13d6aac6-118d-4cfe-9e36-fc7e994b47d0", 00:09:23.357 "is_configured": false, 00:09:23.357 "data_offset": 0, 00:09:23.357 "data_size": 65536 00:09:23.357 }, 00:09:23.357 { 00:09:23.357 "name": "BaseBdev3", 00:09:23.357 "uuid": "0a738b48-1d0e-4815-a86f-80d625703d5c", 00:09:23.357 "is_configured": true, 00:09:23.357 "data_offset": 0, 00:09:23.357 "data_size": 65536 00:09:23.357 } 00:09:23.357 ] 00:09:23.357 }' 00:09:23.357 16:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:23.357 16:05:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.619 16:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.619 16:05:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.619 16:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:23.619 16:05:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.619 16:05:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.619 16:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:23.619 16:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:23.619 16:05:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.619 16:05:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.620 [2024-12-12 16:05:49.913666] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:23.620 
16:05:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.620 16:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:23.620 16:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:23.620 16:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:23.620 16:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:23.620 16:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:23.620 16:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:23.620 16:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:23.620 16:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:23.620 16:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:23.620 16:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:23.620 16:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.620 16:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:23.620 16:05:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.620 16:05:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.620 16:05:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.879 16:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:23.879 "name": "Existed_Raid", 00:09:23.879 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:23.879 "strip_size_kb": 64, 00:09:23.879 "state": "configuring", 00:09:23.879 "raid_level": "concat", 00:09:23.879 "superblock": false, 00:09:23.879 "num_base_bdevs": 3, 00:09:23.879 "num_base_bdevs_discovered": 1, 00:09:23.879 "num_base_bdevs_operational": 3, 00:09:23.879 "base_bdevs_list": [ 00:09:23.879 { 00:09:23.879 "name": "BaseBdev1", 00:09:23.879 "uuid": "232f7409-db63-46bd-aef1-99a512cdf3a5", 00:09:23.879 "is_configured": true, 00:09:23.879 "data_offset": 0, 00:09:23.879 "data_size": 65536 00:09:23.879 }, 00:09:23.879 { 00:09:23.879 "name": null, 00:09:23.879 "uuid": "13d6aac6-118d-4cfe-9e36-fc7e994b47d0", 00:09:23.879 "is_configured": false, 00:09:23.879 "data_offset": 0, 00:09:23.879 "data_size": 65536 00:09:23.879 }, 00:09:23.879 { 00:09:23.879 "name": null, 00:09:23.879 "uuid": "0a738b48-1d0e-4815-a86f-80d625703d5c", 00:09:23.879 "is_configured": false, 00:09:23.879 "data_offset": 0, 00:09:23.879 "data_size": 65536 00:09:23.879 } 00:09:23.879 ] 00:09:23.879 }' 00:09:23.879 16:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:23.879 16:05:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.140 16:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.140 16:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:24.140 16:05:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.140 16:05:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.140 16:05:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.140 16:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:24.140 16:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:24.140 16:05:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.140 16:05:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.140 [2024-12-12 16:05:50.376916] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:24.140 16:05:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.140 16:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:24.140 16:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:24.140 16:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:24.140 16:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:24.140 16:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:24.140 16:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:24.140 16:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:24.140 16:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:24.140 16:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:24.140 16:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:24.140 16:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:24.140 16:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.140 16:05:50 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.140 16:05:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.140 16:05:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.140 16:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:24.140 "name": "Existed_Raid", 00:09:24.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:24.140 "strip_size_kb": 64, 00:09:24.140 "state": "configuring", 00:09:24.140 "raid_level": "concat", 00:09:24.140 "superblock": false, 00:09:24.140 "num_base_bdevs": 3, 00:09:24.140 "num_base_bdevs_discovered": 2, 00:09:24.140 "num_base_bdevs_operational": 3, 00:09:24.140 "base_bdevs_list": [ 00:09:24.140 { 00:09:24.140 "name": "BaseBdev1", 00:09:24.140 "uuid": "232f7409-db63-46bd-aef1-99a512cdf3a5", 00:09:24.140 "is_configured": true, 00:09:24.140 "data_offset": 0, 00:09:24.140 "data_size": 65536 00:09:24.140 }, 00:09:24.140 { 00:09:24.140 "name": null, 00:09:24.140 "uuid": "13d6aac6-118d-4cfe-9e36-fc7e994b47d0", 00:09:24.140 "is_configured": false, 00:09:24.140 "data_offset": 0, 00:09:24.140 "data_size": 65536 00:09:24.140 }, 00:09:24.140 { 00:09:24.140 "name": "BaseBdev3", 00:09:24.140 "uuid": "0a738b48-1d0e-4815-a86f-80d625703d5c", 00:09:24.140 "is_configured": true, 00:09:24.140 "data_offset": 0, 00:09:24.140 "data_size": 65536 00:09:24.140 } 00:09:24.140 ] 00:09:24.140 }' 00:09:24.140 16:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:24.140 16:05:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.709 16:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.709 16:05:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.709 16:05:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:09:24.709 16:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:24.709 16:05:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.709 16:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:24.709 16:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:24.709 16:05:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.709 16:05:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.709 [2024-12-12 16:05:50.856083] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:24.709 16:05:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.709 16:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:24.709 16:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:24.709 16:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:24.709 16:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:24.709 16:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:24.709 16:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:24.709 16:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:24.709 16:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:24.710 16:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:24.710 16:05:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:24.710 16:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.710 16:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:24.710 16:05:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.710 16:05:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.710 16:05:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.710 16:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:24.710 "name": "Existed_Raid", 00:09:24.710 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:24.710 "strip_size_kb": 64, 00:09:24.710 "state": "configuring", 00:09:24.710 "raid_level": "concat", 00:09:24.710 "superblock": false, 00:09:24.710 "num_base_bdevs": 3, 00:09:24.710 "num_base_bdevs_discovered": 1, 00:09:24.710 "num_base_bdevs_operational": 3, 00:09:24.710 "base_bdevs_list": [ 00:09:24.710 { 00:09:24.710 "name": null, 00:09:24.710 "uuid": "232f7409-db63-46bd-aef1-99a512cdf3a5", 00:09:24.710 "is_configured": false, 00:09:24.710 "data_offset": 0, 00:09:24.710 "data_size": 65536 00:09:24.710 }, 00:09:24.710 { 00:09:24.710 "name": null, 00:09:24.710 "uuid": "13d6aac6-118d-4cfe-9e36-fc7e994b47d0", 00:09:24.710 "is_configured": false, 00:09:24.710 "data_offset": 0, 00:09:24.710 "data_size": 65536 00:09:24.710 }, 00:09:24.710 { 00:09:24.710 "name": "BaseBdev3", 00:09:24.710 "uuid": "0a738b48-1d0e-4815-a86f-80d625703d5c", 00:09:24.710 "is_configured": true, 00:09:24.710 "data_offset": 0, 00:09:24.710 "data_size": 65536 00:09:24.710 } 00:09:24.710 ] 00:09:24.710 }' 00:09:24.710 16:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:24.710 16:05:51 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.279 16:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.279 16:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:25.280 16:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.280 16:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.280 16:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.280 16:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:25.280 16:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:25.280 16:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.280 16:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.280 [2024-12-12 16:05:51.442094] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:25.280 16:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.280 16:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:25.280 16:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:25.280 16:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:25.280 16:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:25.280 16:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:25.280 16:05:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:25.280 16:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:25.280 16:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:25.280 16:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:25.280 16:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:25.280 16:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.280 16:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:25.280 16:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.280 16:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.280 16:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.280 16:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:25.280 "name": "Existed_Raid", 00:09:25.280 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:25.280 "strip_size_kb": 64, 00:09:25.280 "state": "configuring", 00:09:25.280 "raid_level": "concat", 00:09:25.280 "superblock": false, 00:09:25.280 "num_base_bdevs": 3, 00:09:25.280 "num_base_bdevs_discovered": 2, 00:09:25.280 "num_base_bdevs_operational": 3, 00:09:25.280 "base_bdevs_list": [ 00:09:25.280 { 00:09:25.280 "name": null, 00:09:25.280 "uuid": "232f7409-db63-46bd-aef1-99a512cdf3a5", 00:09:25.280 "is_configured": false, 00:09:25.280 "data_offset": 0, 00:09:25.280 "data_size": 65536 00:09:25.280 }, 00:09:25.280 { 00:09:25.280 "name": "BaseBdev2", 00:09:25.280 "uuid": "13d6aac6-118d-4cfe-9e36-fc7e994b47d0", 00:09:25.280 "is_configured": true, 00:09:25.280 "data_offset": 
0, 00:09:25.280 "data_size": 65536 00:09:25.280 }, 00:09:25.280 { 00:09:25.280 "name": "BaseBdev3", 00:09:25.280 "uuid": "0a738b48-1d0e-4815-a86f-80d625703d5c", 00:09:25.280 "is_configured": true, 00:09:25.280 "data_offset": 0, 00:09:25.280 "data_size": 65536 00:09:25.280 } 00:09:25.280 ] 00:09:25.280 }' 00:09:25.280 16:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:25.280 16:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.540 16:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:25.540 16:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.540 16:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.540 16:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.800 16:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.800 16:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:25.800 16:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.800 16:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.800 16:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.800 16:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:25.800 16:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.800 16:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 232f7409-db63-46bd-aef1-99a512cdf3a5 00:09:25.800 16:05:51 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.800 16:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.800 [2024-12-12 16:05:52.012355] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:25.800 [2024-12-12 16:05:52.012491] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:25.800 [2024-12-12 16:05:52.012520] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:25.800 [2024-12-12 16:05:52.012829] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:25.800 [2024-12-12 16:05:52.013078] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:25.800 [2024-12-12 16:05:52.013121] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:25.800 [2024-12-12 16:05:52.013413] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:25.800 NewBaseBdev 00:09:25.800 16:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.800 16:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:25.800 16:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:25.800 16:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:25.800 16:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:25.800 16:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:25.800 16:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:25.800 16:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:25.800 
16:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.800 16:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.800 16:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.800 16:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:25.800 16:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.800 16:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.800 [ 00:09:25.800 { 00:09:25.800 "name": "NewBaseBdev", 00:09:25.800 "aliases": [ 00:09:25.800 "232f7409-db63-46bd-aef1-99a512cdf3a5" 00:09:25.800 ], 00:09:25.800 "product_name": "Malloc disk", 00:09:25.800 "block_size": 512, 00:09:25.800 "num_blocks": 65536, 00:09:25.800 "uuid": "232f7409-db63-46bd-aef1-99a512cdf3a5", 00:09:25.800 "assigned_rate_limits": { 00:09:25.800 "rw_ios_per_sec": 0, 00:09:25.800 "rw_mbytes_per_sec": 0, 00:09:25.800 "r_mbytes_per_sec": 0, 00:09:25.800 "w_mbytes_per_sec": 0 00:09:25.800 }, 00:09:25.800 "claimed": true, 00:09:25.800 "claim_type": "exclusive_write", 00:09:25.800 "zoned": false, 00:09:25.800 "supported_io_types": { 00:09:25.800 "read": true, 00:09:25.800 "write": true, 00:09:25.800 "unmap": true, 00:09:25.800 "flush": true, 00:09:25.800 "reset": true, 00:09:25.800 "nvme_admin": false, 00:09:25.800 "nvme_io": false, 00:09:25.800 "nvme_io_md": false, 00:09:25.800 "write_zeroes": true, 00:09:25.800 "zcopy": true, 00:09:25.800 "get_zone_info": false, 00:09:25.800 "zone_management": false, 00:09:25.800 "zone_append": false, 00:09:25.800 "compare": false, 00:09:25.800 "compare_and_write": false, 00:09:25.800 "abort": true, 00:09:25.800 "seek_hole": false, 00:09:25.800 "seek_data": false, 00:09:25.800 "copy": true, 00:09:25.800 "nvme_iov_md": false 00:09:25.800 }, 00:09:25.800 
"memory_domains": [ 00:09:25.800 { 00:09:25.800 "dma_device_id": "system", 00:09:25.800 "dma_device_type": 1 00:09:25.800 }, 00:09:25.800 { 00:09:25.800 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:25.800 "dma_device_type": 2 00:09:25.800 } 00:09:25.800 ], 00:09:25.800 "driver_specific": {} 00:09:25.800 } 00:09:25.800 ] 00:09:25.800 16:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.800 16:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:25.800 16:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:25.800 16:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:25.800 16:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:25.800 16:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:25.800 16:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:25.800 16:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:25.800 16:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:25.800 16:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:25.800 16:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:25.800 16:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:25.800 16:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.800 16:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.800 16:05:52 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:25.800 16:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:25.800 16:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.800 16:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:25.800 "name": "Existed_Raid", 00:09:25.800 "uuid": "a98fb0d5-c398-4133-9b2e-ad14a9a37485", 00:09:25.800 "strip_size_kb": 64, 00:09:25.800 "state": "online", 00:09:25.800 "raid_level": "concat", 00:09:25.800 "superblock": false, 00:09:25.800 "num_base_bdevs": 3, 00:09:25.800 "num_base_bdevs_discovered": 3, 00:09:25.800 "num_base_bdevs_operational": 3, 00:09:25.800 "base_bdevs_list": [ 00:09:25.800 { 00:09:25.800 "name": "NewBaseBdev", 00:09:25.800 "uuid": "232f7409-db63-46bd-aef1-99a512cdf3a5", 00:09:25.800 "is_configured": true, 00:09:25.800 "data_offset": 0, 00:09:25.800 "data_size": 65536 00:09:25.800 }, 00:09:25.800 { 00:09:25.800 "name": "BaseBdev2", 00:09:25.800 "uuid": "13d6aac6-118d-4cfe-9e36-fc7e994b47d0", 00:09:25.800 "is_configured": true, 00:09:25.800 "data_offset": 0, 00:09:25.800 "data_size": 65536 00:09:25.800 }, 00:09:25.800 { 00:09:25.800 "name": "BaseBdev3", 00:09:25.800 "uuid": "0a738b48-1d0e-4815-a86f-80d625703d5c", 00:09:25.800 "is_configured": true, 00:09:25.800 "data_offset": 0, 00:09:25.800 "data_size": 65536 00:09:25.800 } 00:09:25.800 ] 00:09:25.800 }' 00:09:25.800 16:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:25.800 16:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.370 16:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:26.370 16:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:26.370 16:05:52 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:26.370 16:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:26.370 16:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:26.370 16:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:26.370 16:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:26.370 16:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.370 16:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:26.370 16:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.370 [2024-12-12 16:05:52.500037] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:26.370 16:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.370 16:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:26.370 "name": "Existed_Raid", 00:09:26.370 "aliases": [ 00:09:26.370 "a98fb0d5-c398-4133-9b2e-ad14a9a37485" 00:09:26.370 ], 00:09:26.370 "product_name": "Raid Volume", 00:09:26.370 "block_size": 512, 00:09:26.370 "num_blocks": 196608, 00:09:26.370 "uuid": "a98fb0d5-c398-4133-9b2e-ad14a9a37485", 00:09:26.370 "assigned_rate_limits": { 00:09:26.370 "rw_ios_per_sec": 0, 00:09:26.370 "rw_mbytes_per_sec": 0, 00:09:26.370 "r_mbytes_per_sec": 0, 00:09:26.370 "w_mbytes_per_sec": 0 00:09:26.370 }, 00:09:26.370 "claimed": false, 00:09:26.370 "zoned": false, 00:09:26.370 "supported_io_types": { 00:09:26.370 "read": true, 00:09:26.370 "write": true, 00:09:26.370 "unmap": true, 00:09:26.370 "flush": true, 00:09:26.370 "reset": true, 00:09:26.370 "nvme_admin": false, 00:09:26.370 "nvme_io": false, 00:09:26.370 "nvme_io_md": false, 00:09:26.370 
"write_zeroes": true, 00:09:26.370 "zcopy": false, 00:09:26.370 "get_zone_info": false, 00:09:26.370 "zone_management": false, 00:09:26.370 "zone_append": false, 00:09:26.370 "compare": false, 00:09:26.370 "compare_and_write": false, 00:09:26.370 "abort": false, 00:09:26.370 "seek_hole": false, 00:09:26.370 "seek_data": false, 00:09:26.370 "copy": false, 00:09:26.370 "nvme_iov_md": false 00:09:26.370 }, 00:09:26.370 "memory_domains": [ 00:09:26.370 { 00:09:26.370 "dma_device_id": "system", 00:09:26.370 "dma_device_type": 1 00:09:26.370 }, 00:09:26.370 { 00:09:26.370 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:26.370 "dma_device_type": 2 00:09:26.370 }, 00:09:26.370 { 00:09:26.370 "dma_device_id": "system", 00:09:26.370 "dma_device_type": 1 00:09:26.370 }, 00:09:26.370 { 00:09:26.370 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:26.370 "dma_device_type": 2 00:09:26.370 }, 00:09:26.370 { 00:09:26.370 "dma_device_id": "system", 00:09:26.370 "dma_device_type": 1 00:09:26.370 }, 00:09:26.370 { 00:09:26.370 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:26.370 "dma_device_type": 2 00:09:26.370 } 00:09:26.370 ], 00:09:26.370 "driver_specific": { 00:09:26.370 "raid": { 00:09:26.370 "uuid": "a98fb0d5-c398-4133-9b2e-ad14a9a37485", 00:09:26.370 "strip_size_kb": 64, 00:09:26.370 "state": "online", 00:09:26.370 "raid_level": "concat", 00:09:26.370 "superblock": false, 00:09:26.370 "num_base_bdevs": 3, 00:09:26.370 "num_base_bdevs_discovered": 3, 00:09:26.370 "num_base_bdevs_operational": 3, 00:09:26.370 "base_bdevs_list": [ 00:09:26.370 { 00:09:26.370 "name": "NewBaseBdev", 00:09:26.370 "uuid": "232f7409-db63-46bd-aef1-99a512cdf3a5", 00:09:26.370 "is_configured": true, 00:09:26.370 "data_offset": 0, 00:09:26.370 "data_size": 65536 00:09:26.370 }, 00:09:26.370 { 00:09:26.370 "name": "BaseBdev2", 00:09:26.370 "uuid": "13d6aac6-118d-4cfe-9e36-fc7e994b47d0", 00:09:26.370 "is_configured": true, 00:09:26.370 "data_offset": 0, 00:09:26.370 "data_size": 65536 00:09:26.370 }, 
00:09:26.370 { 00:09:26.370 "name": "BaseBdev3", 00:09:26.370 "uuid": "0a738b48-1d0e-4815-a86f-80d625703d5c", 00:09:26.370 "is_configured": true, 00:09:26.370 "data_offset": 0, 00:09:26.370 "data_size": 65536 00:09:26.370 } 00:09:26.370 ] 00:09:26.370 } 00:09:26.370 } 00:09:26.370 }' 00:09:26.370 16:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:26.370 16:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:26.370 BaseBdev2 00:09:26.370 BaseBdev3' 00:09:26.370 16:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:26.370 16:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:26.370 16:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:26.370 16:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:26.370 16:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.370 16:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.370 16:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:26.370 16:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.370 16:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:26.370 16:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:26.370 16:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:26.370 16:05:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:26.370 16:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:26.370 16:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.370 16:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.370 16:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.630 16:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:26.630 16:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:26.630 16:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:26.630 16:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:26.630 16:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.630 16:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.630 16:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:26.630 16:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.630 16:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:26.630 16:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:26.630 16:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:26.630 16:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.630 
16:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.630 [2024-12-12 16:05:52.779163] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:26.630 [2024-12-12 16:05:52.779212] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:26.630 [2024-12-12 16:05:52.779310] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:26.630 [2024-12-12 16:05:52.779377] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:26.630 [2024-12-12 16:05:52.779391] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:26.630 16:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.630 16:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 67632 00:09:26.630 16:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 67632 ']' 00:09:26.630 16:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 67632 00:09:26.630 16:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:09:26.630 16:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:26.630 16:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67632 00:09:26.630 16:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:26.630 16:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:26.630 16:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67632' 00:09:26.630 killing process with pid 67632 00:09:26.630 16:05:52 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 67632 00:09:26.630 [2024-12-12 16:05:52.819482] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:26.630 16:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 67632 00:09:26.889 [2024-12-12 16:05:53.152936] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:28.269 16:05:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:28.269 00:09:28.269 real 0m10.606s 00:09:28.269 user 0m16.513s 00:09:28.269 sys 0m1.896s 00:09:28.269 16:05:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:28.269 ************************************ 00:09:28.269 END TEST raid_state_function_test 00:09:28.269 ************************************ 00:09:28.269 16:05:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.269 16:05:54 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:09:28.269 16:05:54 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:28.269 16:05:54 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:28.269 16:05:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:28.269 ************************************ 00:09:28.269 START TEST raid_state_function_test_sb 00:09:28.269 ************************************ 00:09:28.269 16:05:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 true 00:09:28.269 16:05:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:28.269 16:05:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:28.269 16:05:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:28.269 16:05:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:28.269 16:05:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:28.269 16:05:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:28.269 16:05:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:28.269 16:05:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:28.269 16:05:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:28.269 16:05:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:28.269 16:05:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:28.269 16:05:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:28.269 16:05:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:28.269 16:05:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:28.269 16:05:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:28.269 16:05:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:28.269 16:05:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:28.269 16:05:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:28.269 16:05:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:28.269 16:05:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:28.269 16:05:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:28.269 16:05:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:28.269 16:05:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:28.269 16:05:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:28.269 16:05:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:28.269 16:05:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:28.269 16:05:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:28.269 16:05:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=68255 00:09:28.269 16:05:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 68255' 00:09:28.269 Process raid pid: 68255 00:09:28.269 16:05:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 68255 00:09:28.269 16:05:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 68255 ']' 00:09:28.269 16:05:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:28.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:28.269 16:05:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:28.269 16:05:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:28.269 16:05:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:28.269 16:05:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.269 [2024-12-12 16:05:54.567115] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:09:28.269 [2024-12-12 16:05:54.567241] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:28.529 [2024-12-12 16:05:54.744006] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:28.787 [2024-12-12 16:05:54.889801] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:28.787 [2024-12-12 16:05:55.132023] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:28.787 [2024-12-12 16:05:55.132073] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:29.353 16:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:29.353 16:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:09:29.353 16:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:29.353 16:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.353 16:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.353 [2024-12-12 16:05:55.477556] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:29.353 [2024-12-12 16:05:55.477627] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:29.353 [2024-12-12 
16:05:55.477639] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:29.353 [2024-12-12 16:05:55.477650] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:29.353 [2024-12-12 16:05:55.477656] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:29.353 [2024-12-12 16:05:55.477666] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:29.353 16:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.353 16:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:29.353 16:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:29.354 16:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:29.354 16:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:29.354 16:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:29.354 16:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:29.354 16:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:29.354 16:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:29.354 16:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:29.354 16:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:29.354 16:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.354 16:05:55 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.354 16:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.354 16:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:29.354 16:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.354 16:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:29.354 "name": "Existed_Raid", 00:09:29.354 "uuid": "70ae314c-82a8-4721-9523-2be71ab2e964", 00:09:29.354 "strip_size_kb": 64, 00:09:29.354 "state": "configuring", 00:09:29.354 "raid_level": "concat", 00:09:29.354 "superblock": true, 00:09:29.354 "num_base_bdevs": 3, 00:09:29.354 "num_base_bdevs_discovered": 0, 00:09:29.354 "num_base_bdevs_operational": 3, 00:09:29.354 "base_bdevs_list": [ 00:09:29.354 { 00:09:29.354 "name": "BaseBdev1", 00:09:29.354 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:29.354 "is_configured": false, 00:09:29.354 "data_offset": 0, 00:09:29.354 "data_size": 0 00:09:29.354 }, 00:09:29.354 { 00:09:29.354 "name": "BaseBdev2", 00:09:29.354 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:29.354 "is_configured": false, 00:09:29.354 "data_offset": 0, 00:09:29.354 "data_size": 0 00:09:29.354 }, 00:09:29.354 { 00:09:29.354 "name": "BaseBdev3", 00:09:29.354 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:29.354 "is_configured": false, 00:09:29.354 "data_offset": 0, 00:09:29.354 "data_size": 0 00:09:29.354 } 00:09:29.354 ] 00:09:29.354 }' 00:09:29.354 16:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:29.354 16:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.613 16:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:29.613 16:05:55 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.613 16:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.613 [2024-12-12 16:05:55.944750] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:29.613 [2024-12-12 16:05:55.944908] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:29.613 16:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.613 16:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:29.613 16:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.613 16:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.613 [2024-12-12 16:05:55.956697] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:29.613 [2024-12-12 16:05:55.956789] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:29.613 [2024-12-12 16:05:55.956820] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:29.613 [2024-12-12 16:05:55.956845] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:29.613 [2024-12-12 16:05:55.956922] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:29.613 [2024-12-12 16:05:55.956958] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:29.613 16:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.613 16:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:29.613 
16:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.613 16:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.872 [2024-12-12 16:05:56.010667] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:29.873 BaseBdev1 00:09:29.873 16:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.873 16:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:29.873 16:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:29.873 16:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:29.873 16:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:29.873 16:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:29.873 16:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:29.873 16:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:29.873 16:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.873 16:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.873 16:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.873 16:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:29.873 16:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.873 16:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.873 [ 00:09:29.873 { 
00:09:29.873 "name": "BaseBdev1", 00:09:29.873 "aliases": [ 00:09:29.873 "99c05551-5166-4812-9bdc-75540e47d670" 00:09:29.873 ], 00:09:29.873 "product_name": "Malloc disk", 00:09:29.873 "block_size": 512, 00:09:29.873 "num_blocks": 65536, 00:09:29.873 "uuid": "99c05551-5166-4812-9bdc-75540e47d670", 00:09:29.873 "assigned_rate_limits": { 00:09:29.873 "rw_ios_per_sec": 0, 00:09:29.873 "rw_mbytes_per_sec": 0, 00:09:29.873 "r_mbytes_per_sec": 0, 00:09:29.873 "w_mbytes_per_sec": 0 00:09:29.873 }, 00:09:29.873 "claimed": true, 00:09:29.873 "claim_type": "exclusive_write", 00:09:29.873 "zoned": false, 00:09:29.873 "supported_io_types": { 00:09:29.873 "read": true, 00:09:29.873 "write": true, 00:09:29.873 "unmap": true, 00:09:29.873 "flush": true, 00:09:29.873 "reset": true, 00:09:29.873 "nvme_admin": false, 00:09:29.873 "nvme_io": false, 00:09:29.873 "nvme_io_md": false, 00:09:29.873 "write_zeroes": true, 00:09:29.873 "zcopy": true, 00:09:29.873 "get_zone_info": false, 00:09:29.873 "zone_management": false, 00:09:29.873 "zone_append": false, 00:09:29.873 "compare": false, 00:09:29.873 "compare_and_write": false, 00:09:29.873 "abort": true, 00:09:29.873 "seek_hole": false, 00:09:29.873 "seek_data": false, 00:09:29.873 "copy": true, 00:09:29.873 "nvme_iov_md": false 00:09:29.873 }, 00:09:29.873 "memory_domains": [ 00:09:29.873 { 00:09:29.873 "dma_device_id": "system", 00:09:29.873 "dma_device_type": 1 00:09:29.873 }, 00:09:29.873 { 00:09:29.873 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:29.873 "dma_device_type": 2 00:09:29.873 } 00:09:29.873 ], 00:09:29.873 "driver_specific": {} 00:09:29.873 } 00:09:29.873 ] 00:09:29.873 16:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.873 16:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:29.873 16:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 
00:09:29.873 16:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:29.873 16:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:29.873 16:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:29.873 16:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:29.873 16:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:29.873 16:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:29.873 16:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:29.873 16:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:29.873 16:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:29.873 16:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:29.873 16:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.873 16:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.873 16:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.873 16:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.873 16:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:29.873 "name": "Existed_Raid", 00:09:29.873 "uuid": "e9e526d6-da09-4da9-982c-beb743cebd31", 00:09:29.873 "strip_size_kb": 64, 00:09:29.873 "state": "configuring", 00:09:29.873 "raid_level": "concat", 00:09:29.873 "superblock": true, 00:09:29.873 
"num_base_bdevs": 3, 00:09:29.873 "num_base_bdevs_discovered": 1, 00:09:29.873 "num_base_bdevs_operational": 3, 00:09:29.873 "base_bdevs_list": [ 00:09:29.873 { 00:09:29.873 "name": "BaseBdev1", 00:09:29.873 "uuid": "99c05551-5166-4812-9bdc-75540e47d670", 00:09:29.873 "is_configured": true, 00:09:29.873 "data_offset": 2048, 00:09:29.873 "data_size": 63488 00:09:29.873 }, 00:09:29.873 { 00:09:29.873 "name": "BaseBdev2", 00:09:29.873 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:29.873 "is_configured": false, 00:09:29.873 "data_offset": 0, 00:09:29.873 "data_size": 0 00:09:29.873 }, 00:09:29.873 { 00:09:29.873 "name": "BaseBdev3", 00:09:29.873 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:29.873 "is_configured": false, 00:09:29.873 "data_offset": 0, 00:09:29.873 "data_size": 0 00:09:29.873 } 00:09:29.873 ] 00:09:29.873 }' 00:09:29.873 16:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:29.873 16:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.133 16:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:30.133 16:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.133 16:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.133 [2024-12-12 16:05:56.461992] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:30.133 [2024-12-12 16:05:56.462148] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:30.133 16:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.133 16:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:30.133 
16:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.133 16:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.133 [2024-12-12 16:05:56.474029] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:30.133 [2024-12-12 16:05:56.476322] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:30.133 [2024-12-12 16:05:56.476408] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:30.133 [2024-12-12 16:05:56.476453] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:30.133 [2024-12-12 16:05:56.476478] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:30.133 16:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.133 16:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:30.133 16:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:30.133 16:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:30.133 16:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:30.133 16:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:30.133 16:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:30.133 16:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:30.133 16:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:30.133 16:05:56 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:30.133 16:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:30.133 16:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:30.133 16:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:30.392 16:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.392 16:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.392 16:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.392 16:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:30.392 16:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.392 16:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:30.392 "name": "Existed_Raid", 00:09:30.392 "uuid": "a5709558-3e5b-4cfb-83c9-57830dd78378", 00:09:30.392 "strip_size_kb": 64, 00:09:30.392 "state": "configuring", 00:09:30.392 "raid_level": "concat", 00:09:30.392 "superblock": true, 00:09:30.392 "num_base_bdevs": 3, 00:09:30.392 "num_base_bdevs_discovered": 1, 00:09:30.392 "num_base_bdevs_operational": 3, 00:09:30.392 "base_bdevs_list": [ 00:09:30.392 { 00:09:30.392 "name": "BaseBdev1", 00:09:30.392 "uuid": "99c05551-5166-4812-9bdc-75540e47d670", 00:09:30.392 "is_configured": true, 00:09:30.392 "data_offset": 2048, 00:09:30.392 "data_size": 63488 00:09:30.392 }, 00:09:30.392 { 00:09:30.392 "name": "BaseBdev2", 00:09:30.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:30.392 "is_configured": false, 00:09:30.392 "data_offset": 0, 00:09:30.392 "data_size": 0 00:09:30.392 }, 00:09:30.392 { 00:09:30.392 "name": "BaseBdev3", 00:09:30.392 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:30.392 "is_configured": false, 00:09:30.392 "data_offset": 0, 00:09:30.392 "data_size": 0 00:09:30.392 } 00:09:30.392 ] 00:09:30.392 }' 00:09:30.392 16:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:30.392 16:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.653 16:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:30.653 16:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.653 16:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.653 [2024-12-12 16:05:56.920106] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:30.653 BaseBdev2 00:09:30.653 16:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.653 16:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:30.653 16:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:30.653 16:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:30.653 16:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:30.653 16:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:30.653 16:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:30.653 16:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:30.653 16:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.653 16:05:56 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:30.653 16:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.653 16:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:30.653 16:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.653 16:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.653 [ 00:09:30.653 { 00:09:30.654 "name": "BaseBdev2", 00:09:30.654 "aliases": [ 00:09:30.654 "fa5fb506-f253-434d-afb6-6debea43b816" 00:09:30.654 ], 00:09:30.654 "product_name": "Malloc disk", 00:09:30.654 "block_size": 512, 00:09:30.654 "num_blocks": 65536, 00:09:30.654 "uuid": "fa5fb506-f253-434d-afb6-6debea43b816", 00:09:30.654 "assigned_rate_limits": { 00:09:30.654 "rw_ios_per_sec": 0, 00:09:30.654 "rw_mbytes_per_sec": 0, 00:09:30.654 "r_mbytes_per_sec": 0, 00:09:30.654 "w_mbytes_per_sec": 0 00:09:30.654 }, 00:09:30.654 "claimed": true, 00:09:30.654 "claim_type": "exclusive_write", 00:09:30.654 "zoned": false, 00:09:30.654 "supported_io_types": { 00:09:30.654 "read": true, 00:09:30.654 "write": true, 00:09:30.654 "unmap": true, 00:09:30.654 "flush": true, 00:09:30.654 "reset": true, 00:09:30.654 "nvme_admin": false, 00:09:30.654 "nvme_io": false, 00:09:30.654 "nvme_io_md": false, 00:09:30.654 "write_zeroes": true, 00:09:30.654 "zcopy": true, 00:09:30.654 "get_zone_info": false, 00:09:30.654 "zone_management": false, 00:09:30.654 "zone_append": false, 00:09:30.654 "compare": false, 00:09:30.654 "compare_and_write": false, 00:09:30.654 "abort": true, 00:09:30.654 "seek_hole": false, 00:09:30.654 "seek_data": false, 00:09:30.654 "copy": true, 00:09:30.654 "nvme_iov_md": false 00:09:30.654 }, 00:09:30.654 "memory_domains": [ 00:09:30.654 { 00:09:30.654 "dma_device_id": "system", 00:09:30.654 "dma_device_type": 1 00:09:30.654 }, 00:09:30.654 { 00:09:30.654 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:30.654 "dma_device_type": 2 00:09:30.654 } 00:09:30.654 ], 00:09:30.654 "driver_specific": {} 00:09:30.654 } 00:09:30.654 ] 00:09:30.654 16:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.654 16:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:30.654 16:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:30.654 16:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:30.654 16:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:30.654 16:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:30.654 16:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:30.654 16:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:30.654 16:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:30.654 16:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:30.654 16:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:30.654 16:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:30.654 16:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:30.654 16:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:30.654 16:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.654 16:05:56 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:30.654 16:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.654 16:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.654 16:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.916 16:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:30.916 "name": "Existed_Raid", 00:09:30.916 "uuid": "a5709558-3e5b-4cfb-83c9-57830dd78378", 00:09:30.916 "strip_size_kb": 64, 00:09:30.916 "state": "configuring", 00:09:30.916 "raid_level": "concat", 00:09:30.916 "superblock": true, 00:09:30.916 "num_base_bdevs": 3, 00:09:30.916 "num_base_bdevs_discovered": 2, 00:09:30.916 "num_base_bdevs_operational": 3, 00:09:30.916 "base_bdevs_list": [ 00:09:30.916 { 00:09:30.916 "name": "BaseBdev1", 00:09:30.916 "uuid": "99c05551-5166-4812-9bdc-75540e47d670", 00:09:30.916 "is_configured": true, 00:09:30.916 "data_offset": 2048, 00:09:30.916 "data_size": 63488 00:09:30.916 }, 00:09:30.916 { 00:09:30.916 "name": "BaseBdev2", 00:09:30.916 "uuid": "fa5fb506-f253-434d-afb6-6debea43b816", 00:09:30.916 "is_configured": true, 00:09:30.916 "data_offset": 2048, 00:09:30.916 "data_size": 63488 00:09:30.916 }, 00:09:30.916 { 00:09:30.916 "name": "BaseBdev3", 00:09:30.916 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:30.916 "is_configured": false, 00:09:30.916 "data_offset": 0, 00:09:30.916 "data_size": 0 00:09:30.916 } 00:09:30.916 ] 00:09:30.916 }' 00:09:30.916 16:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:30.916 16:05:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.178 16:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:31.178 16:05:57 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.178 16:05:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.178 [2024-12-12 16:05:57.398132] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:31.178 [2024-12-12 16:05:57.398542] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:31.178 [2024-12-12 16:05:57.398572] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:31.178 [2024-12-12 16:05:57.398872] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:31.178 BaseBdev3 00:09:31.178 [2024-12-12 16:05:57.399074] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:31.178 [2024-12-12 16:05:57.399092] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:31.178 [2024-12-12 16:05:57.399266] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:31.178 16:05:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.178 16:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:31.178 16:05:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:31.178 16:05:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:31.178 16:05:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:31.178 16:05:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:31.178 16:05:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:31.178 16:05:57 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:31.178 16:05:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.178 16:05:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.178 16:05:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.178 16:05:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:31.178 16:05:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.178 16:05:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.178 [ 00:09:31.178 { 00:09:31.178 "name": "BaseBdev3", 00:09:31.178 "aliases": [ 00:09:31.178 "7f6ee725-7cf1-4f3c-8d8c-afbfd0974201" 00:09:31.178 ], 00:09:31.178 "product_name": "Malloc disk", 00:09:31.178 "block_size": 512, 00:09:31.178 "num_blocks": 65536, 00:09:31.178 "uuid": "7f6ee725-7cf1-4f3c-8d8c-afbfd0974201", 00:09:31.178 "assigned_rate_limits": { 00:09:31.178 "rw_ios_per_sec": 0, 00:09:31.178 "rw_mbytes_per_sec": 0, 00:09:31.178 "r_mbytes_per_sec": 0, 00:09:31.178 "w_mbytes_per_sec": 0 00:09:31.178 }, 00:09:31.178 "claimed": true, 00:09:31.178 "claim_type": "exclusive_write", 00:09:31.178 "zoned": false, 00:09:31.178 "supported_io_types": { 00:09:31.178 "read": true, 00:09:31.178 "write": true, 00:09:31.178 "unmap": true, 00:09:31.178 "flush": true, 00:09:31.178 "reset": true, 00:09:31.178 "nvme_admin": false, 00:09:31.178 "nvme_io": false, 00:09:31.178 "nvme_io_md": false, 00:09:31.178 "write_zeroes": true, 00:09:31.178 "zcopy": true, 00:09:31.178 "get_zone_info": false, 00:09:31.178 "zone_management": false, 00:09:31.178 "zone_append": false, 00:09:31.178 "compare": false, 00:09:31.178 "compare_and_write": false, 00:09:31.178 "abort": true, 00:09:31.178 "seek_hole": false, 00:09:31.178 "seek_data": false, 
00:09:31.178 "copy": true, 00:09:31.178 "nvme_iov_md": false 00:09:31.178 }, 00:09:31.178 "memory_domains": [ 00:09:31.178 { 00:09:31.178 "dma_device_id": "system", 00:09:31.178 "dma_device_type": 1 00:09:31.178 }, 00:09:31.178 { 00:09:31.178 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:31.178 "dma_device_type": 2 00:09:31.178 } 00:09:31.178 ], 00:09:31.178 "driver_specific": {} 00:09:31.178 } 00:09:31.178 ] 00:09:31.178 16:05:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.178 16:05:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:31.178 16:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:31.178 16:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:31.178 16:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:31.178 16:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:31.178 16:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:31.178 16:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:31.178 16:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:31.178 16:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:31.178 16:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:31.178 16:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:31.178 16:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:31.178 16:05:57 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:09:31.178 16:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.178 16:05:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.178 16:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:31.178 16:05:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.178 16:05:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.178 16:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:31.178 "name": "Existed_Raid", 00:09:31.178 "uuid": "a5709558-3e5b-4cfb-83c9-57830dd78378", 00:09:31.178 "strip_size_kb": 64, 00:09:31.178 "state": "online", 00:09:31.178 "raid_level": "concat", 00:09:31.178 "superblock": true, 00:09:31.178 "num_base_bdevs": 3, 00:09:31.178 "num_base_bdevs_discovered": 3, 00:09:31.178 "num_base_bdevs_operational": 3, 00:09:31.178 "base_bdevs_list": [ 00:09:31.178 { 00:09:31.178 "name": "BaseBdev1", 00:09:31.178 "uuid": "99c05551-5166-4812-9bdc-75540e47d670", 00:09:31.178 "is_configured": true, 00:09:31.178 "data_offset": 2048, 00:09:31.178 "data_size": 63488 00:09:31.178 }, 00:09:31.178 { 00:09:31.178 "name": "BaseBdev2", 00:09:31.178 "uuid": "fa5fb506-f253-434d-afb6-6debea43b816", 00:09:31.178 "is_configured": true, 00:09:31.178 "data_offset": 2048, 00:09:31.178 "data_size": 63488 00:09:31.178 }, 00:09:31.178 { 00:09:31.178 "name": "BaseBdev3", 00:09:31.178 "uuid": "7f6ee725-7cf1-4f3c-8d8c-afbfd0974201", 00:09:31.178 "is_configured": true, 00:09:31.178 "data_offset": 2048, 00:09:31.178 "data_size": 63488 00:09:31.178 } 00:09:31.178 ] 00:09:31.178 }' 00:09:31.178 16:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:31.178 16:05:57 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.747 16:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:31.747 16:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:31.747 16:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:31.747 16:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:31.747 16:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:31.747 16:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:31.747 16:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:31.747 16:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:31.747 16:05:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.747 16:05:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.747 [2024-12-12 16:05:57.825841] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:31.747 16:05:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.747 16:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:31.747 "name": "Existed_Raid", 00:09:31.747 "aliases": [ 00:09:31.747 "a5709558-3e5b-4cfb-83c9-57830dd78378" 00:09:31.747 ], 00:09:31.747 "product_name": "Raid Volume", 00:09:31.747 "block_size": 512, 00:09:31.747 "num_blocks": 190464, 00:09:31.747 "uuid": "a5709558-3e5b-4cfb-83c9-57830dd78378", 00:09:31.748 "assigned_rate_limits": { 00:09:31.748 "rw_ios_per_sec": 0, 00:09:31.748 "rw_mbytes_per_sec": 0, 00:09:31.748 
"r_mbytes_per_sec": 0, 00:09:31.748 "w_mbytes_per_sec": 0 00:09:31.748 }, 00:09:31.748 "claimed": false, 00:09:31.748 "zoned": false, 00:09:31.748 "supported_io_types": { 00:09:31.748 "read": true, 00:09:31.748 "write": true, 00:09:31.748 "unmap": true, 00:09:31.748 "flush": true, 00:09:31.748 "reset": true, 00:09:31.748 "nvme_admin": false, 00:09:31.748 "nvme_io": false, 00:09:31.748 "nvme_io_md": false, 00:09:31.748 "write_zeroes": true, 00:09:31.748 "zcopy": false, 00:09:31.748 "get_zone_info": false, 00:09:31.748 "zone_management": false, 00:09:31.748 "zone_append": false, 00:09:31.748 "compare": false, 00:09:31.748 "compare_and_write": false, 00:09:31.748 "abort": false, 00:09:31.748 "seek_hole": false, 00:09:31.748 "seek_data": false, 00:09:31.748 "copy": false, 00:09:31.748 "nvme_iov_md": false 00:09:31.748 }, 00:09:31.748 "memory_domains": [ 00:09:31.748 { 00:09:31.748 "dma_device_id": "system", 00:09:31.748 "dma_device_type": 1 00:09:31.748 }, 00:09:31.748 { 00:09:31.748 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:31.748 "dma_device_type": 2 00:09:31.748 }, 00:09:31.748 { 00:09:31.748 "dma_device_id": "system", 00:09:31.748 "dma_device_type": 1 00:09:31.748 }, 00:09:31.748 { 00:09:31.748 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:31.748 "dma_device_type": 2 00:09:31.748 }, 00:09:31.748 { 00:09:31.748 "dma_device_id": "system", 00:09:31.748 "dma_device_type": 1 00:09:31.748 }, 00:09:31.748 { 00:09:31.748 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:31.748 "dma_device_type": 2 00:09:31.748 } 00:09:31.748 ], 00:09:31.748 "driver_specific": { 00:09:31.748 "raid": { 00:09:31.748 "uuid": "a5709558-3e5b-4cfb-83c9-57830dd78378", 00:09:31.748 "strip_size_kb": 64, 00:09:31.748 "state": "online", 00:09:31.748 "raid_level": "concat", 00:09:31.748 "superblock": true, 00:09:31.748 "num_base_bdevs": 3, 00:09:31.748 "num_base_bdevs_discovered": 3, 00:09:31.748 "num_base_bdevs_operational": 3, 00:09:31.748 "base_bdevs_list": [ 00:09:31.748 { 00:09:31.748 
"name": "BaseBdev1", 00:09:31.748 "uuid": "99c05551-5166-4812-9bdc-75540e47d670", 00:09:31.748 "is_configured": true, 00:09:31.748 "data_offset": 2048, 00:09:31.748 "data_size": 63488 00:09:31.748 }, 00:09:31.748 { 00:09:31.748 "name": "BaseBdev2", 00:09:31.748 "uuid": "fa5fb506-f253-434d-afb6-6debea43b816", 00:09:31.748 "is_configured": true, 00:09:31.748 "data_offset": 2048, 00:09:31.748 "data_size": 63488 00:09:31.748 }, 00:09:31.748 { 00:09:31.748 "name": "BaseBdev3", 00:09:31.748 "uuid": "7f6ee725-7cf1-4f3c-8d8c-afbfd0974201", 00:09:31.748 "is_configured": true, 00:09:31.748 "data_offset": 2048, 00:09:31.748 "data_size": 63488 00:09:31.748 } 00:09:31.748 ] 00:09:31.748 } 00:09:31.748 } 00:09:31.748 }' 00:09:31.748 16:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:31.748 16:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:31.748 BaseBdev2 00:09:31.748 BaseBdev3' 00:09:31.748 16:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:31.748 16:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:31.748 16:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:31.748 16:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:31.748 16:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:31.748 16:05:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.748 16:05:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.748 16:05:57 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.748 16:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:31.748 16:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:31.748 16:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:31.748 16:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:31.748 16:05:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.748 16:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:31.748 16:05:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.748 16:05:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.748 16:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:31.748 16:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:31.748 16:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:31.748 16:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:31.748 16:05:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.748 16:05:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.748 16:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:31.748 16:05:58 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.748 16:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:31.748 16:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:31.748 16:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:31.748 16:05:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.748 16:05:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.748 [2024-12-12 16:05:58.061081] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:31.748 [2024-12-12 16:05:58.061117] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:31.748 [2024-12-12 16:05:58.061176] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:32.008 16:05:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.008 16:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:32.008 16:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:32.008 16:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:32.008 16:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:09:32.008 16:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:32.008 16:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:09:32.008 16:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:32.008 16:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=offline 00:09:32.008 16:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:32.008 16:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:32.008 16:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:32.008 16:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:32.008 16:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:32.008 16:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:32.008 16:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:32.008 16:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.008 16:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:32.008 16:05:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.008 16:05:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.008 16:05:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.008 16:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:32.008 "name": "Existed_Raid", 00:09:32.008 "uuid": "a5709558-3e5b-4cfb-83c9-57830dd78378", 00:09:32.008 "strip_size_kb": 64, 00:09:32.008 "state": "offline", 00:09:32.008 "raid_level": "concat", 00:09:32.008 "superblock": true, 00:09:32.008 "num_base_bdevs": 3, 00:09:32.008 "num_base_bdevs_discovered": 2, 00:09:32.008 "num_base_bdevs_operational": 2, 00:09:32.008 "base_bdevs_list": [ 00:09:32.008 { 00:09:32.008 "name": null, 00:09:32.008 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:32.008 "is_configured": false, 00:09:32.008 "data_offset": 0, 00:09:32.008 "data_size": 63488 00:09:32.008 }, 00:09:32.008 { 00:09:32.008 "name": "BaseBdev2", 00:09:32.008 "uuid": "fa5fb506-f253-434d-afb6-6debea43b816", 00:09:32.008 "is_configured": true, 00:09:32.008 "data_offset": 2048, 00:09:32.008 "data_size": 63488 00:09:32.008 }, 00:09:32.008 { 00:09:32.008 "name": "BaseBdev3", 00:09:32.008 "uuid": "7f6ee725-7cf1-4f3c-8d8c-afbfd0974201", 00:09:32.008 "is_configured": true, 00:09:32.008 "data_offset": 2048, 00:09:32.008 "data_size": 63488 00:09:32.008 } 00:09:32.008 ] 00:09:32.008 }' 00:09:32.008 16:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:32.008 16:05:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.268 16:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:32.268 16:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:32.268 16:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.268 16:05:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.268 16:05:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.268 16:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:32.268 16:05:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.528 16:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:32.528 16:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:32.528 16:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 
00:09:32.528 16:05:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.528 16:05:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.528 [2024-12-12 16:05:58.627547] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:32.528 16:05:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.528 16:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:32.528 16:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:32.528 16:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.528 16:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:32.528 16:05:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.528 16:05:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.528 16:05:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.528 16:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:32.528 16:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:32.528 16:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:32.528 16:05:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.528 16:05:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.528 [2024-12-12 16:05:58.790794] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:32.528 [2024-12-12 16:05:58.790929] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:32.791 16:05:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.791 16:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:32.792 16:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:32.792 16:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.792 16:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:32.792 16:05:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.792 16:05:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.792 16:05:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.792 16:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:32.792 16:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:32.792 16:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:32.792 16:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:32.792 16:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:32.792 16:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:32.792 16:05:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.792 16:05:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.792 BaseBdev2 00:09:32.792 16:05:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.792 
16:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:32.792 16:05:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:32.792 16:05:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:32.792 16:05:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:32.792 16:05:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:32.792 16:05:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:32.792 16:05:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:32.792 16:05:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.792 16:05:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.792 16:05:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.792 16:05:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:32.792 16:05:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.792 16:05:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.792 [ 00:09:32.792 { 00:09:32.792 "name": "BaseBdev2", 00:09:32.792 "aliases": [ 00:09:32.792 "adf9cbfc-75ae-4642-9073-c6ea2713b358" 00:09:32.792 ], 00:09:32.792 "product_name": "Malloc disk", 00:09:32.792 "block_size": 512, 00:09:32.792 "num_blocks": 65536, 00:09:32.792 "uuid": "adf9cbfc-75ae-4642-9073-c6ea2713b358", 00:09:32.792 "assigned_rate_limits": { 00:09:32.792 "rw_ios_per_sec": 0, 00:09:32.792 "rw_mbytes_per_sec": 0, 00:09:32.792 "r_mbytes_per_sec": 0, 00:09:32.792 "w_mbytes_per_sec": 0 
00:09:32.792 }, 00:09:32.792 "claimed": false, 00:09:32.792 "zoned": false, 00:09:32.792 "supported_io_types": { 00:09:32.792 "read": true, 00:09:32.792 "write": true, 00:09:32.792 "unmap": true, 00:09:32.792 "flush": true, 00:09:32.792 "reset": true, 00:09:32.792 "nvme_admin": false, 00:09:32.792 "nvme_io": false, 00:09:32.792 "nvme_io_md": false, 00:09:32.792 "write_zeroes": true, 00:09:32.792 "zcopy": true, 00:09:32.792 "get_zone_info": false, 00:09:32.792 "zone_management": false, 00:09:32.792 "zone_append": false, 00:09:32.792 "compare": false, 00:09:32.792 "compare_and_write": false, 00:09:32.792 "abort": true, 00:09:32.792 "seek_hole": false, 00:09:32.792 "seek_data": false, 00:09:32.792 "copy": true, 00:09:32.792 "nvme_iov_md": false 00:09:32.792 }, 00:09:32.792 "memory_domains": [ 00:09:32.792 { 00:09:32.792 "dma_device_id": "system", 00:09:32.792 "dma_device_type": 1 00:09:32.792 }, 00:09:32.792 { 00:09:32.792 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:32.792 "dma_device_type": 2 00:09:32.792 } 00:09:32.792 ], 00:09:32.792 "driver_specific": {} 00:09:32.792 } 00:09:32.792 ] 00:09:32.792 16:05:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.792 16:05:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:32.792 16:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:32.792 16:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:32.792 16:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:32.792 16:05:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.792 16:05:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.792 BaseBdev3 00:09:32.792 16:05:59 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.792 16:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:32.792 16:05:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:32.792 16:05:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:32.792 16:05:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:32.792 16:05:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:32.792 16:05:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:32.792 16:05:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:32.792 16:05:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.792 16:05:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.792 16:05:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.793 16:05:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:32.793 16:05:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.793 16:05:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.793 [ 00:09:32.793 { 00:09:32.793 "name": "BaseBdev3", 00:09:32.793 "aliases": [ 00:09:32.793 "5567859a-78d2-42ee-b8f9-305bc58daa50" 00:09:32.793 ], 00:09:32.793 "product_name": "Malloc disk", 00:09:32.793 "block_size": 512, 00:09:32.793 "num_blocks": 65536, 00:09:32.793 "uuid": "5567859a-78d2-42ee-b8f9-305bc58daa50", 00:09:32.793 "assigned_rate_limits": { 00:09:32.793 "rw_ios_per_sec": 0, 00:09:32.793 "rw_mbytes_per_sec": 0, 
00:09:32.793 "r_mbytes_per_sec": 0, 00:09:32.793 "w_mbytes_per_sec": 0 00:09:32.793 }, 00:09:32.793 "claimed": false, 00:09:32.793 "zoned": false, 00:09:32.793 "supported_io_types": { 00:09:32.793 "read": true, 00:09:32.793 "write": true, 00:09:32.793 "unmap": true, 00:09:32.793 "flush": true, 00:09:32.793 "reset": true, 00:09:32.793 "nvme_admin": false, 00:09:32.793 "nvme_io": false, 00:09:32.793 "nvme_io_md": false, 00:09:32.793 "write_zeroes": true, 00:09:32.793 "zcopy": true, 00:09:32.793 "get_zone_info": false, 00:09:32.793 "zone_management": false, 00:09:32.793 "zone_append": false, 00:09:32.793 "compare": false, 00:09:32.793 "compare_and_write": false, 00:09:32.793 "abort": true, 00:09:32.793 "seek_hole": false, 00:09:32.793 "seek_data": false, 00:09:32.793 "copy": true, 00:09:32.793 "nvme_iov_md": false 00:09:32.793 }, 00:09:32.793 "memory_domains": [ 00:09:32.793 { 00:09:32.793 "dma_device_id": "system", 00:09:32.793 "dma_device_type": 1 00:09:32.793 }, 00:09:32.793 { 00:09:32.793 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:32.793 "dma_device_type": 2 00:09:32.793 } 00:09:32.793 ], 00:09:32.793 "driver_specific": {} 00:09:32.793 } 00:09:32.793 ] 00:09:32.793 16:05:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.793 16:05:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:32.793 16:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:32.793 16:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:32.793 16:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:32.793 16:05:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.793 16:05:59 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:32.793 [2024-12-12 16:05:59.130965] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:32.793 [2024-12-12 16:05:59.131059] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:32.793 [2024-12-12 16:05:59.131103] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:32.793 [2024-12-12 16:05:59.133206] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:32.793 16:05:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.793 16:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:32.793 16:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:32.793 16:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:32.793 16:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:32.793 16:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:32.793 16:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:32.793 16:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:32.793 16:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:32.793 16:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:32.793 16:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:33.057 16:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.057 16:05:59 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.057 16:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:33.057 16:05:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.057 16:05:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.057 16:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:33.057 "name": "Existed_Raid", 00:09:33.057 "uuid": "7fd3d438-f318-47ec-8542-657d9f544904", 00:09:33.057 "strip_size_kb": 64, 00:09:33.057 "state": "configuring", 00:09:33.057 "raid_level": "concat", 00:09:33.057 "superblock": true, 00:09:33.057 "num_base_bdevs": 3, 00:09:33.057 "num_base_bdevs_discovered": 2, 00:09:33.057 "num_base_bdevs_operational": 3, 00:09:33.057 "base_bdevs_list": [ 00:09:33.057 { 00:09:33.057 "name": "BaseBdev1", 00:09:33.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.057 "is_configured": false, 00:09:33.057 "data_offset": 0, 00:09:33.057 "data_size": 0 00:09:33.057 }, 00:09:33.057 { 00:09:33.057 "name": "BaseBdev2", 00:09:33.057 "uuid": "adf9cbfc-75ae-4642-9073-c6ea2713b358", 00:09:33.057 "is_configured": true, 00:09:33.057 "data_offset": 2048, 00:09:33.057 "data_size": 63488 00:09:33.057 }, 00:09:33.057 { 00:09:33.057 "name": "BaseBdev3", 00:09:33.057 "uuid": "5567859a-78d2-42ee-b8f9-305bc58daa50", 00:09:33.057 "is_configured": true, 00:09:33.057 "data_offset": 2048, 00:09:33.057 "data_size": 63488 00:09:33.057 } 00:09:33.057 ] 00:09:33.057 }' 00:09:33.057 16:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:33.057 16:05:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.317 16:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 
00:09:33.317 16:05:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.317 16:05:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.317 [2024-12-12 16:05:59.558241] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:33.317 16:05:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.317 16:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:33.317 16:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:33.317 16:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:33.317 16:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:33.317 16:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:33.317 16:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:33.317 16:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:33.317 16:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:33.317 16:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:33.317 16:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:33.317 16:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.317 16:05:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.317 16:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:09:33.317 16:05:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.317 16:05:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.317 16:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:33.317 "name": "Existed_Raid", 00:09:33.317 "uuid": "7fd3d438-f318-47ec-8542-657d9f544904", 00:09:33.317 "strip_size_kb": 64, 00:09:33.317 "state": "configuring", 00:09:33.317 "raid_level": "concat", 00:09:33.317 "superblock": true, 00:09:33.317 "num_base_bdevs": 3, 00:09:33.317 "num_base_bdevs_discovered": 1, 00:09:33.317 "num_base_bdevs_operational": 3, 00:09:33.317 "base_bdevs_list": [ 00:09:33.317 { 00:09:33.317 "name": "BaseBdev1", 00:09:33.317 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.317 "is_configured": false, 00:09:33.317 "data_offset": 0, 00:09:33.317 "data_size": 0 00:09:33.317 }, 00:09:33.317 { 00:09:33.317 "name": null, 00:09:33.317 "uuid": "adf9cbfc-75ae-4642-9073-c6ea2713b358", 00:09:33.317 "is_configured": false, 00:09:33.317 "data_offset": 0, 00:09:33.317 "data_size": 63488 00:09:33.317 }, 00:09:33.317 { 00:09:33.317 "name": "BaseBdev3", 00:09:33.317 "uuid": "5567859a-78d2-42ee-b8f9-305bc58daa50", 00:09:33.317 "is_configured": true, 00:09:33.318 "data_offset": 2048, 00:09:33.318 "data_size": 63488 00:09:33.318 } 00:09:33.318 ] 00:09:33.318 }' 00:09:33.318 16:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:33.318 16:05:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.887 16:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:33.887 16:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.887 16:05:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:33.887 16:05:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.887 16:05:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.887 16:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:33.887 16:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:33.887 16:05:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.887 16:05:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.887 [2024-12-12 16:06:00.031635] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:33.887 BaseBdev1 00:09:33.887 16:06:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.887 16:06:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:33.887 16:06:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:33.887 16:06:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:33.887 16:06:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:33.887 16:06:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:33.887 16:06:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:33.887 16:06:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:33.887 16:06:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.887 16:06:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.887 16:06:00 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.888 16:06:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:33.888 16:06:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.888 16:06:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.888 [ 00:09:33.888 { 00:09:33.888 "name": "BaseBdev1", 00:09:33.888 "aliases": [ 00:09:33.888 "36d9c57b-2a9b-4a73-a5f9-78a2484dd9bd" 00:09:33.888 ], 00:09:33.888 "product_name": "Malloc disk", 00:09:33.888 "block_size": 512, 00:09:33.888 "num_blocks": 65536, 00:09:33.888 "uuid": "36d9c57b-2a9b-4a73-a5f9-78a2484dd9bd", 00:09:33.888 "assigned_rate_limits": { 00:09:33.888 "rw_ios_per_sec": 0, 00:09:33.888 "rw_mbytes_per_sec": 0, 00:09:33.888 "r_mbytes_per_sec": 0, 00:09:33.888 "w_mbytes_per_sec": 0 00:09:33.888 }, 00:09:33.888 "claimed": true, 00:09:33.888 "claim_type": "exclusive_write", 00:09:33.888 "zoned": false, 00:09:33.888 "supported_io_types": { 00:09:33.888 "read": true, 00:09:33.888 "write": true, 00:09:33.888 "unmap": true, 00:09:33.888 "flush": true, 00:09:33.888 "reset": true, 00:09:33.888 "nvme_admin": false, 00:09:33.888 "nvme_io": false, 00:09:33.888 "nvme_io_md": false, 00:09:33.888 "write_zeroes": true, 00:09:33.888 "zcopy": true, 00:09:33.888 "get_zone_info": false, 00:09:33.888 "zone_management": false, 00:09:33.888 "zone_append": false, 00:09:33.888 "compare": false, 00:09:33.888 "compare_and_write": false, 00:09:33.888 "abort": true, 00:09:33.888 "seek_hole": false, 00:09:33.888 "seek_data": false, 00:09:33.888 "copy": true, 00:09:33.888 "nvme_iov_md": false 00:09:33.888 }, 00:09:33.888 "memory_domains": [ 00:09:33.888 { 00:09:33.888 "dma_device_id": "system", 00:09:33.888 "dma_device_type": 1 00:09:33.888 }, 00:09:33.888 { 00:09:33.888 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:33.888 
"dma_device_type": 2 00:09:33.888 } 00:09:33.888 ], 00:09:33.888 "driver_specific": {} 00:09:33.888 } 00:09:33.888 ] 00:09:33.888 16:06:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.888 16:06:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:33.888 16:06:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:33.888 16:06:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:33.888 16:06:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:33.888 16:06:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:33.888 16:06:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:33.888 16:06:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:33.888 16:06:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:33.888 16:06:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:33.888 16:06:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:33.888 16:06:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:33.888 16:06:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.888 16:06:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.888 16:06:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.888 16:06:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:09:33.888 16:06:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.888 16:06:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:33.888 "name": "Existed_Raid", 00:09:33.888 "uuid": "7fd3d438-f318-47ec-8542-657d9f544904", 00:09:33.888 "strip_size_kb": 64, 00:09:33.888 "state": "configuring", 00:09:33.888 "raid_level": "concat", 00:09:33.888 "superblock": true, 00:09:33.888 "num_base_bdevs": 3, 00:09:33.888 "num_base_bdevs_discovered": 2, 00:09:33.888 "num_base_bdevs_operational": 3, 00:09:33.888 "base_bdevs_list": [ 00:09:33.888 { 00:09:33.888 "name": "BaseBdev1", 00:09:33.888 "uuid": "36d9c57b-2a9b-4a73-a5f9-78a2484dd9bd", 00:09:33.888 "is_configured": true, 00:09:33.888 "data_offset": 2048, 00:09:33.888 "data_size": 63488 00:09:33.888 }, 00:09:33.888 { 00:09:33.888 "name": null, 00:09:33.888 "uuid": "adf9cbfc-75ae-4642-9073-c6ea2713b358", 00:09:33.888 "is_configured": false, 00:09:33.888 "data_offset": 0, 00:09:33.888 "data_size": 63488 00:09:33.888 }, 00:09:33.888 { 00:09:33.888 "name": "BaseBdev3", 00:09:33.888 "uuid": "5567859a-78d2-42ee-b8f9-305bc58daa50", 00:09:33.888 "is_configured": true, 00:09:33.888 "data_offset": 2048, 00:09:33.888 "data_size": 63488 00:09:33.888 } 00:09:33.888 ] 00:09:33.888 }' 00:09:33.888 16:06:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:33.888 16:06:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.148 16:06:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.148 16:06:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.148 16:06:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.148 16:06:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq 
'.[0].base_bdevs_list[0].is_configured' 00:09:34.148 16:06:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.148 16:06:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:34.148 16:06:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:34.148 16:06:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.148 16:06:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.408 [2024-12-12 16:06:00.498894] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:34.408 16:06:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.408 16:06:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:34.408 16:06:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:34.408 16:06:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:34.408 16:06:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:34.408 16:06:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:34.408 16:06:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:34.408 16:06:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:34.408 16:06:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:34.408 16:06:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:34.408 16:06:00 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:09:34.408 16:06:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.408 16:06:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:34.408 16:06:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.408 16:06:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.408 16:06:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.408 16:06:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:34.408 "name": "Existed_Raid", 00:09:34.408 "uuid": "7fd3d438-f318-47ec-8542-657d9f544904", 00:09:34.408 "strip_size_kb": 64, 00:09:34.408 "state": "configuring", 00:09:34.408 "raid_level": "concat", 00:09:34.408 "superblock": true, 00:09:34.408 "num_base_bdevs": 3, 00:09:34.408 "num_base_bdevs_discovered": 1, 00:09:34.408 "num_base_bdevs_operational": 3, 00:09:34.408 "base_bdevs_list": [ 00:09:34.408 { 00:09:34.408 "name": "BaseBdev1", 00:09:34.408 "uuid": "36d9c57b-2a9b-4a73-a5f9-78a2484dd9bd", 00:09:34.408 "is_configured": true, 00:09:34.408 "data_offset": 2048, 00:09:34.408 "data_size": 63488 00:09:34.408 }, 00:09:34.408 { 00:09:34.408 "name": null, 00:09:34.408 "uuid": "adf9cbfc-75ae-4642-9073-c6ea2713b358", 00:09:34.408 "is_configured": false, 00:09:34.408 "data_offset": 0, 00:09:34.408 "data_size": 63488 00:09:34.408 }, 00:09:34.408 { 00:09:34.408 "name": null, 00:09:34.408 "uuid": "5567859a-78d2-42ee-b8f9-305bc58daa50", 00:09:34.408 "is_configured": false, 00:09:34.408 "data_offset": 0, 00:09:34.408 "data_size": 63488 00:09:34.408 } 00:09:34.408 ] 00:09:34.408 }' 00:09:34.408 16:06:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:34.408 16:06:00 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:34.668 16:06:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.668 16:06:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:34.668 16:06:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.668 16:06:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.668 16:06:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.668 16:06:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:34.668 16:06:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:34.668 16:06:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.668 16:06:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.668 [2024-12-12 16:06:00.946135] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:34.668 16:06:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.668 16:06:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:34.668 16:06:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:34.668 16:06:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:34.668 16:06:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:34.668 16:06:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:34.668 16:06:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:34.668 16:06:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:34.668 16:06:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:34.668 16:06:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:34.668 16:06:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:34.668 16:06:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.668 16:06:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.668 16:06:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.668 16:06:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:34.668 16:06:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.668 16:06:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:34.668 "name": "Existed_Raid", 00:09:34.668 "uuid": "7fd3d438-f318-47ec-8542-657d9f544904", 00:09:34.668 "strip_size_kb": 64, 00:09:34.668 "state": "configuring", 00:09:34.668 "raid_level": "concat", 00:09:34.668 "superblock": true, 00:09:34.668 "num_base_bdevs": 3, 00:09:34.668 "num_base_bdevs_discovered": 2, 00:09:34.668 "num_base_bdevs_operational": 3, 00:09:34.668 "base_bdevs_list": [ 00:09:34.668 { 00:09:34.668 "name": "BaseBdev1", 00:09:34.668 "uuid": "36d9c57b-2a9b-4a73-a5f9-78a2484dd9bd", 00:09:34.668 "is_configured": true, 00:09:34.668 "data_offset": 2048, 00:09:34.668 "data_size": 63488 00:09:34.668 }, 00:09:34.668 { 00:09:34.668 "name": null, 00:09:34.668 "uuid": "adf9cbfc-75ae-4642-9073-c6ea2713b358", 00:09:34.668 "is_configured": 
false, 00:09:34.668 "data_offset": 0, 00:09:34.668 "data_size": 63488 00:09:34.668 }, 00:09:34.668 { 00:09:34.668 "name": "BaseBdev3", 00:09:34.668 "uuid": "5567859a-78d2-42ee-b8f9-305bc58daa50", 00:09:34.668 "is_configured": true, 00:09:34.668 "data_offset": 2048, 00:09:34.668 "data_size": 63488 00:09:34.668 } 00:09:34.668 ] 00:09:34.668 }' 00:09:34.668 16:06:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:34.668 16:06:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.237 16:06:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.237 16:06:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.237 16:06:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.237 16:06:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:35.237 16:06:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.237 16:06:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:35.237 16:06:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:35.237 16:06:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.237 16:06:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.237 [2024-12-12 16:06:01.421355] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:35.237 16:06:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.237 16:06:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:35.237 16:06:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:35.237 16:06:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:35.237 16:06:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:35.237 16:06:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:35.237 16:06:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:35.237 16:06:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:35.237 16:06:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:35.237 16:06:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:35.238 16:06:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:35.238 16:06:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.238 16:06:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.238 16:06:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.238 16:06:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:35.238 16:06:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.238 16:06:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:35.238 "name": "Existed_Raid", 00:09:35.238 "uuid": "7fd3d438-f318-47ec-8542-657d9f544904", 00:09:35.238 "strip_size_kb": 64, 00:09:35.238 "state": "configuring", 00:09:35.238 "raid_level": "concat", 00:09:35.238 "superblock": true, 00:09:35.238 "num_base_bdevs": 3, 00:09:35.238 
"num_base_bdevs_discovered": 1, 00:09:35.238 "num_base_bdevs_operational": 3, 00:09:35.238 "base_bdevs_list": [ 00:09:35.238 { 00:09:35.238 "name": null, 00:09:35.238 "uuid": "36d9c57b-2a9b-4a73-a5f9-78a2484dd9bd", 00:09:35.238 "is_configured": false, 00:09:35.238 "data_offset": 0, 00:09:35.238 "data_size": 63488 00:09:35.238 }, 00:09:35.238 { 00:09:35.238 "name": null, 00:09:35.238 "uuid": "adf9cbfc-75ae-4642-9073-c6ea2713b358", 00:09:35.238 "is_configured": false, 00:09:35.238 "data_offset": 0, 00:09:35.238 "data_size": 63488 00:09:35.238 }, 00:09:35.238 { 00:09:35.238 "name": "BaseBdev3", 00:09:35.238 "uuid": "5567859a-78d2-42ee-b8f9-305bc58daa50", 00:09:35.238 "is_configured": true, 00:09:35.238 "data_offset": 2048, 00:09:35.238 "data_size": 63488 00:09:35.238 } 00:09:35.238 ] 00:09:35.238 }' 00:09:35.238 16:06:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:35.238 16:06:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.807 16:06:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.807 16:06:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:35.808 16:06:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.808 16:06:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.808 16:06:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.808 16:06:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:35.808 16:06:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:35.808 16:06:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.808 16:06:02 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.808 [2024-12-12 16:06:02.010586] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:35.808 16:06:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.808 16:06:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:35.808 16:06:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:35.808 16:06:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:35.808 16:06:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:35.808 16:06:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:35.808 16:06:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:35.808 16:06:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:35.808 16:06:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:35.808 16:06:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:35.808 16:06:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:35.808 16:06:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.808 16:06:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:35.808 16:06:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.808 16:06:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.808 
16:06:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.808 16:06:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:35.808 "name": "Existed_Raid", 00:09:35.808 "uuid": "7fd3d438-f318-47ec-8542-657d9f544904", 00:09:35.808 "strip_size_kb": 64, 00:09:35.808 "state": "configuring", 00:09:35.808 "raid_level": "concat", 00:09:35.808 "superblock": true, 00:09:35.808 "num_base_bdevs": 3, 00:09:35.808 "num_base_bdevs_discovered": 2, 00:09:35.808 "num_base_bdevs_operational": 3, 00:09:35.808 "base_bdevs_list": [ 00:09:35.808 { 00:09:35.808 "name": null, 00:09:35.808 "uuid": "36d9c57b-2a9b-4a73-a5f9-78a2484dd9bd", 00:09:35.808 "is_configured": false, 00:09:35.808 "data_offset": 0, 00:09:35.808 "data_size": 63488 00:09:35.808 }, 00:09:35.808 { 00:09:35.808 "name": "BaseBdev2", 00:09:35.808 "uuid": "adf9cbfc-75ae-4642-9073-c6ea2713b358", 00:09:35.808 "is_configured": true, 00:09:35.808 "data_offset": 2048, 00:09:35.808 "data_size": 63488 00:09:35.808 }, 00:09:35.808 { 00:09:35.808 "name": "BaseBdev3", 00:09:35.808 "uuid": "5567859a-78d2-42ee-b8f9-305bc58daa50", 00:09:35.808 "is_configured": true, 00:09:35.808 "data_offset": 2048, 00:09:35.808 "data_size": 63488 00:09:35.808 } 00:09:35.808 ] 00:09:35.808 }' 00:09:35.808 16:06:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:35.808 16:06:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.378 16:06:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:36.378 16:06:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.378 16:06:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.378 16:06:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:09:36.378 16:06:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.378 16:06:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:36.378 16:06:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:36.378 16:06:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.378 16:06:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.378 16:06:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.378 16:06:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.378 16:06:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 36d9c57b-2a9b-4a73-a5f9-78a2484dd9bd 00:09:36.378 16:06:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.378 16:06:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.378 [2024-12-12 16:06:02.576515] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:36.378 [2024-12-12 16:06:02.576871] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:36.378 [2024-12-12 16:06:02.576948] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:36.378 [2024-12-12 16:06:02.577246] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:36.378 NewBaseBdev 00:09:36.378 [2024-12-12 16:06:02.577442] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:36.378 [2024-12-12 16:06:02.577482] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000008200 00:09:36.378 [2024-12-12 16:06:02.577669] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:36.378 16:06:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.378 16:06:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:36.378 16:06:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:36.378 16:06:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:36.378 16:06:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:36.378 16:06:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:36.378 16:06:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:36.378 16:06:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:36.378 16:06:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.378 16:06:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.378 16:06:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.378 16:06:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:36.378 16:06:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.378 16:06:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.378 [ 00:09:36.378 { 00:09:36.378 "name": "NewBaseBdev", 00:09:36.378 "aliases": [ 00:09:36.378 "36d9c57b-2a9b-4a73-a5f9-78a2484dd9bd" 00:09:36.378 ], 00:09:36.378 "product_name": "Malloc disk", 00:09:36.378 "block_size": 512, 
00:09:36.378 "num_blocks": 65536, 00:09:36.378 "uuid": "36d9c57b-2a9b-4a73-a5f9-78a2484dd9bd", 00:09:36.378 "assigned_rate_limits": { 00:09:36.378 "rw_ios_per_sec": 0, 00:09:36.378 "rw_mbytes_per_sec": 0, 00:09:36.378 "r_mbytes_per_sec": 0, 00:09:36.378 "w_mbytes_per_sec": 0 00:09:36.378 }, 00:09:36.378 "claimed": true, 00:09:36.378 "claim_type": "exclusive_write", 00:09:36.378 "zoned": false, 00:09:36.378 "supported_io_types": { 00:09:36.378 "read": true, 00:09:36.378 "write": true, 00:09:36.378 "unmap": true, 00:09:36.378 "flush": true, 00:09:36.378 "reset": true, 00:09:36.378 "nvme_admin": false, 00:09:36.378 "nvme_io": false, 00:09:36.378 "nvme_io_md": false, 00:09:36.378 "write_zeroes": true, 00:09:36.378 "zcopy": true, 00:09:36.378 "get_zone_info": false, 00:09:36.378 "zone_management": false, 00:09:36.378 "zone_append": false, 00:09:36.378 "compare": false, 00:09:36.378 "compare_and_write": false, 00:09:36.378 "abort": true, 00:09:36.378 "seek_hole": false, 00:09:36.378 "seek_data": false, 00:09:36.378 "copy": true, 00:09:36.378 "nvme_iov_md": false 00:09:36.378 }, 00:09:36.378 "memory_domains": [ 00:09:36.378 { 00:09:36.378 "dma_device_id": "system", 00:09:36.378 "dma_device_type": 1 00:09:36.378 }, 00:09:36.378 { 00:09:36.378 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:36.378 "dma_device_type": 2 00:09:36.378 } 00:09:36.378 ], 00:09:36.378 "driver_specific": {} 00:09:36.378 } 00:09:36.378 ] 00:09:36.378 16:06:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.378 16:06:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:36.378 16:06:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:36.378 16:06:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:36.378 16:06:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 
-- # local expected_state=online 00:09:36.378 16:06:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:36.378 16:06:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:36.378 16:06:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:36.378 16:06:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:36.378 16:06:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:36.378 16:06:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:36.378 16:06:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:36.378 16:06:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.378 16:06:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:36.378 16:06:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.379 16:06:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.379 16:06:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.379 16:06:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:36.379 "name": "Existed_Raid", 00:09:36.379 "uuid": "7fd3d438-f318-47ec-8542-657d9f544904", 00:09:36.379 "strip_size_kb": 64, 00:09:36.379 "state": "online", 00:09:36.379 "raid_level": "concat", 00:09:36.379 "superblock": true, 00:09:36.379 "num_base_bdevs": 3, 00:09:36.379 "num_base_bdevs_discovered": 3, 00:09:36.379 "num_base_bdevs_operational": 3, 00:09:36.379 "base_bdevs_list": [ 00:09:36.379 { 00:09:36.379 "name": "NewBaseBdev", 00:09:36.379 "uuid": 
"36d9c57b-2a9b-4a73-a5f9-78a2484dd9bd", 00:09:36.379 "is_configured": true, 00:09:36.379 "data_offset": 2048, 00:09:36.379 "data_size": 63488 00:09:36.379 }, 00:09:36.379 { 00:09:36.379 "name": "BaseBdev2", 00:09:36.379 "uuid": "adf9cbfc-75ae-4642-9073-c6ea2713b358", 00:09:36.379 "is_configured": true, 00:09:36.379 "data_offset": 2048, 00:09:36.379 "data_size": 63488 00:09:36.379 }, 00:09:36.379 { 00:09:36.379 "name": "BaseBdev3", 00:09:36.379 "uuid": "5567859a-78d2-42ee-b8f9-305bc58daa50", 00:09:36.379 "is_configured": true, 00:09:36.379 "data_offset": 2048, 00:09:36.379 "data_size": 63488 00:09:36.379 } 00:09:36.379 ] 00:09:36.379 }' 00:09:36.379 16:06:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:36.379 16:06:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.948 16:06:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:36.948 16:06:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:36.948 16:06:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:36.948 16:06:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:36.948 16:06:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:36.948 16:06:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:36.948 16:06:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:36.948 16:06:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:36.948 16:06:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.948 16:06:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:09:36.948 [2024-12-12 16:06:03.040177] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:36.948 16:06:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.948 16:06:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:36.948 "name": "Existed_Raid", 00:09:36.948 "aliases": [ 00:09:36.948 "7fd3d438-f318-47ec-8542-657d9f544904" 00:09:36.948 ], 00:09:36.948 "product_name": "Raid Volume", 00:09:36.948 "block_size": 512, 00:09:36.948 "num_blocks": 190464, 00:09:36.948 "uuid": "7fd3d438-f318-47ec-8542-657d9f544904", 00:09:36.948 "assigned_rate_limits": { 00:09:36.948 "rw_ios_per_sec": 0, 00:09:36.948 "rw_mbytes_per_sec": 0, 00:09:36.948 "r_mbytes_per_sec": 0, 00:09:36.948 "w_mbytes_per_sec": 0 00:09:36.948 }, 00:09:36.948 "claimed": false, 00:09:36.948 "zoned": false, 00:09:36.948 "supported_io_types": { 00:09:36.948 "read": true, 00:09:36.948 "write": true, 00:09:36.948 "unmap": true, 00:09:36.948 "flush": true, 00:09:36.948 "reset": true, 00:09:36.948 "nvme_admin": false, 00:09:36.948 "nvme_io": false, 00:09:36.948 "nvme_io_md": false, 00:09:36.948 "write_zeroes": true, 00:09:36.948 "zcopy": false, 00:09:36.948 "get_zone_info": false, 00:09:36.948 "zone_management": false, 00:09:36.948 "zone_append": false, 00:09:36.948 "compare": false, 00:09:36.948 "compare_and_write": false, 00:09:36.948 "abort": false, 00:09:36.948 "seek_hole": false, 00:09:36.948 "seek_data": false, 00:09:36.948 "copy": false, 00:09:36.948 "nvme_iov_md": false 00:09:36.948 }, 00:09:36.948 "memory_domains": [ 00:09:36.948 { 00:09:36.948 "dma_device_id": "system", 00:09:36.948 "dma_device_type": 1 00:09:36.948 }, 00:09:36.948 { 00:09:36.948 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:36.948 "dma_device_type": 2 00:09:36.948 }, 00:09:36.948 { 00:09:36.948 "dma_device_id": "system", 00:09:36.948 "dma_device_type": 1 00:09:36.948 }, 00:09:36.948 { 00:09:36.948 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:36.948 "dma_device_type": 2 00:09:36.948 }, 00:09:36.948 { 00:09:36.948 "dma_device_id": "system", 00:09:36.948 "dma_device_type": 1 00:09:36.948 }, 00:09:36.948 { 00:09:36.948 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:36.948 "dma_device_type": 2 00:09:36.948 } 00:09:36.948 ], 00:09:36.948 "driver_specific": { 00:09:36.948 "raid": { 00:09:36.948 "uuid": "7fd3d438-f318-47ec-8542-657d9f544904", 00:09:36.948 "strip_size_kb": 64, 00:09:36.948 "state": "online", 00:09:36.948 "raid_level": "concat", 00:09:36.948 "superblock": true, 00:09:36.948 "num_base_bdevs": 3, 00:09:36.948 "num_base_bdevs_discovered": 3, 00:09:36.948 "num_base_bdevs_operational": 3, 00:09:36.948 "base_bdevs_list": [ 00:09:36.948 { 00:09:36.948 "name": "NewBaseBdev", 00:09:36.948 "uuid": "36d9c57b-2a9b-4a73-a5f9-78a2484dd9bd", 00:09:36.948 "is_configured": true, 00:09:36.948 "data_offset": 2048, 00:09:36.948 "data_size": 63488 00:09:36.948 }, 00:09:36.948 { 00:09:36.948 "name": "BaseBdev2", 00:09:36.948 "uuid": "adf9cbfc-75ae-4642-9073-c6ea2713b358", 00:09:36.948 "is_configured": true, 00:09:36.948 "data_offset": 2048, 00:09:36.948 "data_size": 63488 00:09:36.948 }, 00:09:36.948 { 00:09:36.948 "name": "BaseBdev3", 00:09:36.948 "uuid": "5567859a-78d2-42ee-b8f9-305bc58daa50", 00:09:36.948 "is_configured": true, 00:09:36.948 "data_offset": 2048, 00:09:36.948 "data_size": 63488 00:09:36.948 } 00:09:36.948 ] 00:09:36.948 } 00:09:36.948 } 00:09:36.948 }' 00:09:36.948 16:06:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:36.948 16:06:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:36.948 BaseBdev2 00:09:36.948 BaseBdev3' 00:09:36.948 16:06:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:09:36.948 16:06:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:36.948 16:06:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:36.948 16:06:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:36.948 16:06:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:36.948 16:06:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.949 16:06:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.949 16:06:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.949 16:06:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:36.949 16:06:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:36.949 16:06:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:36.949 16:06:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:36.949 16:06:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.949 16:06:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.949 16:06:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:36.949 16:06:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.949 16:06:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:36.949 16:06:03 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:36.949 16:06:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:36.949 16:06:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:36.949 16:06:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:36.949 16:06:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.949 16:06:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.949 16:06:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.949 16:06:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:36.949 16:06:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:36.949 16:06:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:36.949 16:06:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.949 16:06:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.949 [2024-12-12 16:06:03.279429] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:36.949 [2024-12-12 16:06:03.279465] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:36.949 [2024-12-12 16:06:03.279553] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:36.949 [2024-12-12 16:06:03.279628] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:36.949 [2024-12-12 16:06:03.279643] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:09:36.949 16:06:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.949 16:06:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 68255 00:09:36.949 16:06:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 68255 ']' 00:09:36.949 16:06:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 68255 00:09:36.949 16:06:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:09:36.949 16:06:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:36.949 16:06:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68255 00:09:37.208 16:06:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:37.208 16:06:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:37.208 killing process with pid 68255 00:09:37.208 16:06:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68255' 00:09:37.209 16:06:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 68255 00:09:37.209 [2024-12-12 16:06:03.316348] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:37.209 16:06:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 68255 00:09:37.473 [2024-12-12 16:06:03.652789] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:38.858 16:06:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:38.858 00:09:38.858 real 0m10.441s 00:09:38.858 user 0m16.160s 00:09:38.858 sys 0m1.858s 00:09:38.858 ************************************ 00:09:38.858 END TEST raid_state_function_test_sb 
00:09:38.858 ************************************ 00:09:38.858 16:06:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:38.858 16:06:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.858 16:06:04 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:09:38.858 16:06:04 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:38.858 16:06:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:38.858 16:06:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:38.858 ************************************ 00:09:38.858 START TEST raid_superblock_test 00:09:38.858 ************************************ 00:09:38.858 16:06:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 3 00:09:38.858 16:06:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:09:38.858 16:06:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:09:38.858 16:06:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:38.858 16:06:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:38.858 16:06:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:38.858 16:06:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:38.858 16:06:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:38.858 16:06:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:38.858 16:06:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:38.859 16:06:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:38.859 16:06:04 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:38.859 16:06:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:38.859 16:06:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:38.859 16:06:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:09:38.859 16:06:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:09:38.859 16:06:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:38.859 16:06:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=68875 00:09:38.859 16:06:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:38.859 16:06:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 68875 00:09:38.859 16:06:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 68875 ']' 00:09:38.859 16:06:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:38.859 16:06:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:38.859 16:06:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:38.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:38.859 16:06:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:38.859 16:06:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.859 [2024-12-12 16:06:05.088348] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:09:38.859 [2024-12-12 16:06:05.089001] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68875 ] 00:09:39.117 [2024-12-12 16:06:05.245274] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:39.117 [2024-12-12 16:06:05.384448] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:39.376 [2024-12-12 16:06:05.633377] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:39.376 [2024-12-12 16:06:05.633564] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:39.635 16:06:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:39.635 16:06:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:09:39.635 16:06:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:39.635 16:06:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:39.635 16:06:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:39.635 16:06:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:39.635 16:06:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:39.635 16:06:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:39.635 16:06:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:39.635 16:06:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:39.635 16:06:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:39.635 
16:06:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.635 16:06:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.635 malloc1 00:09:39.635 16:06:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.636 16:06:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:39.636 16:06:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.636 16:06:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.636 [2024-12-12 16:06:05.982279] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:39.636 [2024-12-12 16:06:05.982356] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:39.636 [2024-12-12 16:06:05.982381] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:39.636 [2024-12-12 16:06:05.982392] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:39.636 [2024-12-12 16:06:05.984831] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:39.636 [2024-12-12 16:06:05.984973] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:39.896 pt1 00:09:39.896 16:06:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.896 16:06:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:39.896 16:06:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:39.896 16:06:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:39.896 16:06:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:39.896 16:06:05 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:39.896 16:06:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:39.896 16:06:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:39.896 16:06:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:39.896 16:06:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:39.896 16:06:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.896 16:06:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.896 malloc2 00:09:39.896 16:06:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.896 16:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:39.896 16:06:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.896 16:06:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.896 [2024-12-12 16:06:06.049417] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:39.896 [2024-12-12 16:06:06.049544] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:39.896 [2024-12-12 16:06:06.049587] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:39.896 [2024-12-12 16:06:06.049616] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:39.896 [2024-12-12 16:06:06.051982] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:39.896 [2024-12-12 16:06:06.052051] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:39.896 
pt2 00:09:39.896 16:06:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.896 16:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:39.896 16:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:39.896 16:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:39.896 16:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:39.896 16:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:39.896 16:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:39.896 16:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:39.896 16:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:39.896 16:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:39.896 16:06:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.896 16:06:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.896 malloc3 00:09:39.896 16:06:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.896 16:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:39.896 16:06:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.896 16:06:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.896 [2024-12-12 16:06:06.136549] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:39.896 [2024-12-12 16:06:06.136672] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:39.896 [2024-12-12 16:06:06.136714] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:39.896 [2024-12-12 16:06:06.136744] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:39.896 [2024-12-12 16:06:06.139115] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:39.896 [2024-12-12 16:06:06.139186] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:39.896 pt3 00:09:39.896 16:06:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.896 16:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:39.896 16:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:39.896 16:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:09:39.896 16:06:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.896 16:06:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.896 [2024-12-12 16:06:06.148585] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:39.896 [2024-12-12 16:06:06.150696] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:39.896 [2024-12-12 16:06:06.150761] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:39.896 [2024-12-12 16:06:06.150945] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:39.896 [2024-12-12 16:06:06.150961] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:39.896 [2024-12-12 16:06:06.151235] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:09:39.896 [2024-12-12 16:06:06.151405] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:39.896 [2024-12-12 16:06:06.151415] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:39.896 [2024-12-12 16:06:06.151577] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:39.896 16:06:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.896 16:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:39.896 16:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:39.896 16:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:39.896 16:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:39.896 16:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:39.896 16:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:39.896 16:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:39.896 16:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:39.896 16:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:39.896 16:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:39.896 16:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.896 16:06:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.896 16:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:39.896 16:06:06 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.896 16:06:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.896 16:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:39.896 "name": "raid_bdev1", 00:09:39.896 "uuid": "c6daa3b1-b427-415a-b58e-99368c3abd4d", 00:09:39.896 "strip_size_kb": 64, 00:09:39.896 "state": "online", 00:09:39.896 "raid_level": "concat", 00:09:39.896 "superblock": true, 00:09:39.896 "num_base_bdevs": 3, 00:09:39.896 "num_base_bdevs_discovered": 3, 00:09:39.896 "num_base_bdevs_operational": 3, 00:09:39.896 "base_bdevs_list": [ 00:09:39.896 { 00:09:39.896 "name": "pt1", 00:09:39.896 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:39.896 "is_configured": true, 00:09:39.896 "data_offset": 2048, 00:09:39.896 "data_size": 63488 00:09:39.896 }, 00:09:39.896 { 00:09:39.896 "name": "pt2", 00:09:39.896 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:39.896 "is_configured": true, 00:09:39.896 "data_offset": 2048, 00:09:39.896 "data_size": 63488 00:09:39.896 }, 00:09:39.896 { 00:09:39.897 "name": "pt3", 00:09:39.897 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:39.897 "is_configured": true, 00:09:39.897 "data_offset": 2048, 00:09:39.897 "data_size": 63488 00:09:39.897 } 00:09:39.897 ] 00:09:39.897 }' 00:09:39.897 16:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:39.897 16:06:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.465 16:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:40.465 16:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:40.465 16:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:40.465 16:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:09:40.465 16:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:40.465 16:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:40.465 16:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:40.465 16:06:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.465 16:06:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.465 16:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:40.465 [2024-12-12 16:06:06.544291] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:40.465 16:06:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.465 16:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:40.465 "name": "raid_bdev1", 00:09:40.465 "aliases": [ 00:09:40.465 "c6daa3b1-b427-415a-b58e-99368c3abd4d" 00:09:40.465 ], 00:09:40.465 "product_name": "Raid Volume", 00:09:40.465 "block_size": 512, 00:09:40.465 "num_blocks": 190464, 00:09:40.465 "uuid": "c6daa3b1-b427-415a-b58e-99368c3abd4d", 00:09:40.465 "assigned_rate_limits": { 00:09:40.465 "rw_ios_per_sec": 0, 00:09:40.465 "rw_mbytes_per_sec": 0, 00:09:40.465 "r_mbytes_per_sec": 0, 00:09:40.465 "w_mbytes_per_sec": 0 00:09:40.465 }, 00:09:40.465 "claimed": false, 00:09:40.465 "zoned": false, 00:09:40.465 "supported_io_types": { 00:09:40.466 "read": true, 00:09:40.466 "write": true, 00:09:40.466 "unmap": true, 00:09:40.466 "flush": true, 00:09:40.466 "reset": true, 00:09:40.466 "nvme_admin": false, 00:09:40.466 "nvme_io": false, 00:09:40.466 "nvme_io_md": false, 00:09:40.466 "write_zeroes": true, 00:09:40.466 "zcopy": false, 00:09:40.466 "get_zone_info": false, 00:09:40.466 "zone_management": false, 00:09:40.466 "zone_append": false, 00:09:40.466 "compare": 
false, 00:09:40.466 "compare_and_write": false, 00:09:40.466 "abort": false, 00:09:40.466 "seek_hole": false, 00:09:40.466 "seek_data": false, 00:09:40.466 "copy": false, 00:09:40.466 "nvme_iov_md": false 00:09:40.466 }, 00:09:40.466 "memory_domains": [ 00:09:40.466 { 00:09:40.466 "dma_device_id": "system", 00:09:40.466 "dma_device_type": 1 00:09:40.466 }, 00:09:40.466 { 00:09:40.466 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:40.466 "dma_device_type": 2 00:09:40.466 }, 00:09:40.466 { 00:09:40.466 "dma_device_id": "system", 00:09:40.466 "dma_device_type": 1 00:09:40.466 }, 00:09:40.466 { 00:09:40.466 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:40.466 "dma_device_type": 2 00:09:40.466 }, 00:09:40.466 { 00:09:40.466 "dma_device_id": "system", 00:09:40.466 "dma_device_type": 1 00:09:40.466 }, 00:09:40.466 { 00:09:40.466 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:40.466 "dma_device_type": 2 00:09:40.466 } 00:09:40.466 ], 00:09:40.466 "driver_specific": { 00:09:40.466 "raid": { 00:09:40.466 "uuid": "c6daa3b1-b427-415a-b58e-99368c3abd4d", 00:09:40.466 "strip_size_kb": 64, 00:09:40.466 "state": "online", 00:09:40.466 "raid_level": "concat", 00:09:40.466 "superblock": true, 00:09:40.466 "num_base_bdevs": 3, 00:09:40.466 "num_base_bdevs_discovered": 3, 00:09:40.466 "num_base_bdevs_operational": 3, 00:09:40.466 "base_bdevs_list": [ 00:09:40.466 { 00:09:40.466 "name": "pt1", 00:09:40.466 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:40.466 "is_configured": true, 00:09:40.466 "data_offset": 2048, 00:09:40.466 "data_size": 63488 00:09:40.466 }, 00:09:40.466 { 00:09:40.466 "name": "pt2", 00:09:40.466 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:40.466 "is_configured": true, 00:09:40.466 "data_offset": 2048, 00:09:40.466 "data_size": 63488 00:09:40.466 }, 00:09:40.466 { 00:09:40.466 "name": "pt3", 00:09:40.466 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:40.466 "is_configured": true, 00:09:40.466 "data_offset": 2048, 00:09:40.466 
"data_size": 63488 00:09:40.466 } 00:09:40.466 ] 00:09:40.466 } 00:09:40.466 } 00:09:40.466 }' 00:09:40.466 16:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:40.466 16:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:40.466 pt2 00:09:40.466 pt3' 00:09:40.466 16:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:40.466 16:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:40.466 16:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:40.466 16:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:40.466 16:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:40.466 16:06:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.466 16:06:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.466 16:06:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.466 16:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:40.466 16:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:40.466 16:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:40.466 16:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:40.466 16:06:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.466 16:06:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:09:40.466 16:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:40.466 16:06:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.466 16:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:40.466 16:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:40.466 16:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:40.466 16:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:40.466 16:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:40.466 16:06:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.466 16:06:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.466 16:06:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.466 16:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:40.466 16:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:40.466 16:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:40.466 16:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:40.466 16:06:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.466 16:06:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.466 [2024-12-12 16:06:06.815643] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:40.726 16:06:06 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.726 16:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=c6daa3b1-b427-415a-b58e-99368c3abd4d 00:09:40.726 16:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z c6daa3b1-b427-415a-b58e-99368c3abd4d ']' 00:09:40.726 16:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:40.726 16:06:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.726 16:06:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.726 [2024-12-12 16:06:06.863295] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:40.726 [2024-12-12 16:06:06.863364] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:40.726 [2024-12-12 16:06:06.863473] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:40.726 [2024-12-12 16:06:06.863567] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:40.726 [2024-12-12 16:06:06.863610] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:09:40.726 16:06:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.726 16:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:40.726 16:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.726 16:06:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.726 16:06:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.726 16:06:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.726 16:06:06 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:40.726 16:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:40.726 16:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:40.726 16:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:40.726 16:06:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.726 16:06:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.726 16:06:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.726 16:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:40.726 16:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:40.726 16:06:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.726 16:06:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.726 16:06:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.726 16:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:40.726 16:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:40.726 16:06:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.726 16:06:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.726 16:06:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.727 16:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:40.727 16:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 
00:09:40.727 16:06:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.727 16:06:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.727 16:06:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.727 16:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:40.727 16:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:40.727 16:06:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:09:40.727 16:06:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:40.727 16:06:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:40.727 16:06:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:40.727 16:06:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:40.727 16:06:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:40.727 16:06:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:40.727 16:06:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.727 16:06:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.727 [2024-12-12 16:06:07.003154] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:40.727 [2024-12-12 16:06:07.005337] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:40.727 
[2024-12-12 16:06:07.005434] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:40.727 [2024-12-12 16:06:07.005511] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:40.727 [2024-12-12 16:06:07.005569] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:40.727 [2024-12-12 16:06:07.005587] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:40.727 [2024-12-12 16:06:07.005604] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:40.727 [2024-12-12 16:06:07.005613] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:09:40.727 request: 00:09:40.727 { 00:09:40.727 "name": "raid_bdev1", 00:09:40.727 "raid_level": "concat", 00:09:40.727 "base_bdevs": [ 00:09:40.727 "malloc1", 00:09:40.727 "malloc2", 00:09:40.727 "malloc3" 00:09:40.727 ], 00:09:40.727 "strip_size_kb": 64, 00:09:40.727 "superblock": false, 00:09:40.727 "method": "bdev_raid_create", 00:09:40.727 "req_id": 1 00:09:40.727 } 00:09:40.727 Got JSON-RPC error response 00:09:40.727 response: 00:09:40.727 { 00:09:40.727 "code": -17, 00:09:40.727 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:40.727 } 00:09:40.727 16:06:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:40.727 16:06:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:09:40.727 16:06:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:40.727 16:06:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:40.727 16:06:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:40.727 16:06:07 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.727 16:06:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.727 16:06:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:40.727 16:06:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.727 16:06:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.727 16:06:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:40.727 16:06:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:40.727 16:06:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:40.727 16:06:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.727 16:06:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.727 [2024-12-12 16:06:07.066986] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:40.727 [2024-12-12 16:06:07.067069] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:40.727 [2024-12-12 16:06:07.067104] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:40.727 [2024-12-12 16:06:07.067131] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:40.727 [2024-12-12 16:06:07.069444] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:40.727 [2024-12-12 16:06:07.069511] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:40.727 [2024-12-12 16:06:07.069616] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:40.727 [2024-12-12 16:06:07.069693] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev pt1 is claimed 00:09:40.727 pt1 00:09:40.727 16:06:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.727 16:06:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:09:40.727 16:06:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:40.727 16:06:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:40.727 16:06:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:40.727 16:06:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:40.727 16:06:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:40.727 16:06:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:40.727 16:06:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:40.727 16:06:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:40.727 16:06:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:40.987 16:06:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.987 16:06:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:40.987 16:06:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.987 16:06:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.987 16:06:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.987 16:06:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:40.987 "name": "raid_bdev1", 00:09:40.987 "uuid": 
"c6daa3b1-b427-415a-b58e-99368c3abd4d", 00:09:40.987 "strip_size_kb": 64, 00:09:40.987 "state": "configuring", 00:09:40.987 "raid_level": "concat", 00:09:40.987 "superblock": true, 00:09:40.987 "num_base_bdevs": 3, 00:09:40.987 "num_base_bdevs_discovered": 1, 00:09:40.987 "num_base_bdevs_operational": 3, 00:09:40.987 "base_bdevs_list": [ 00:09:40.987 { 00:09:40.987 "name": "pt1", 00:09:40.987 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:40.987 "is_configured": true, 00:09:40.987 "data_offset": 2048, 00:09:40.987 "data_size": 63488 00:09:40.987 }, 00:09:40.987 { 00:09:40.987 "name": null, 00:09:40.987 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:40.987 "is_configured": false, 00:09:40.987 "data_offset": 2048, 00:09:40.987 "data_size": 63488 00:09:40.987 }, 00:09:40.987 { 00:09:40.987 "name": null, 00:09:40.987 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:40.987 "is_configured": false, 00:09:40.987 "data_offset": 2048, 00:09:40.987 "data_size": 63488 00:09:40.987 } 00:09:40.987 ] 00:09:40.987 }' 00:09:40.987 16:06:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:40.987 16:06:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.247 16:06:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:09:41.247 16:06:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:41.247 16:06:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.247 16:06:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.247 [2024-12-12 16:06:07.498301] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:41.247 [2024-12-12 16:06:07.498470] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:41.247 [2024-12-12 16:06:07.498503] vbdev_passthru.c: 
682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:09:41.247 [2024-12-12 16:06:07.498513] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:41.247 [2024-12-12 16:06:07.499047] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:41.247 [2024-12-12 16:06:07.499066] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:41.247 [2024-12-12 16:06:07.499169] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:41.247 [2024-12-12 16:06:07.499200] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:41.247 pt2 00:09:41.247 16:06:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.247 16:06:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:41.247 16:06:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.247 16:06:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.247 [2024-12-12 16:06:07.510261] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:41.247 16:06:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.247 16:06:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:09:41.247 16:06:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:41.247 16:06:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:41.247 16:06:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:41.247 16:06:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:41.247 16:06:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:09:41.247 16:06:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:41.247 16:06:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:41.247 16:06:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:41.247 16:06:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:41.247 16:06:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.247 16:06:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:41.247 16:06:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.247 16:06:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.247 16:06:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.247 16:06:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:41.247 "name": "raid_bdev1", 00:09:41.247 "uuid": "c6daa3b1-b427-415a-b58e-99368c3abd4d", 00:09:41.247 "strip_size_kb": 64, 00:09:41.247 "state": "configuring", 00:09:41.247 "raid_level": "concat", 00:09:41.247 "superblock": true, 00:09:41.247 "num_base_bdevs": 3, 00:09:41.247 "num_base_bdevs_discovered": 1, 00:09:41.247 "num_base_bdevs_operational": 3, 00:09:41.247 "base_bdevs_list": [ 00:09:41.247 { 00:09:41.247 "name": "pt1", 00:09:41.247 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:41.247 "is_configured": true, 00:09:41.247 "data_offset": 2048, 00:09:41.247 "data_size": 63488 00:09:41.247 }, 00:09:41.247 { 00:09:41.247 "name": null, 00:09:41.247 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:41.247 "is_configured": false, 00:09:41.247 "data_offset": 0, 00:09:41.247 "data_size": 63488 00:09:41.247 }, 00:09:41.247 { 00:09:41.247 "name": null, 00:09:41.247 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:09:41.247 "is_configured": false, 00:09:41.247 "data_offset": 2048, 00:09:41.247 "data_size": 63488 00:09:41.247 } 00:09:41.247 ] 00:09:41.247 }' 00:09:41.247 16:06:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:41.247 16:06:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.816 16:06:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:41.816 16:06:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:41.816 16:06:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:41.816 16:06:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.816 16:06:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.816 [2024-12-12 16:06:07.937521] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:41.816 [2024-12-12 16:06:07.937679] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:41.816 [2024-12-12 16:06:07.937717] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:09:41.816 [2024-12-12 16:06:07.937748] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:41.816 [2024-12-12 16:06:07.938305] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:41.816 [2024-12-12 16:06:07.938366] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:41.816 [2024-12-12 16:06:07.938488] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:41.816 [2024-12-12 16:06:07.938542] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:41.816 pt2 00:09:41.816 16:06:07 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.816 16:06:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:41.816 16:06:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:41.816 16:06:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:41.816 16:06:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.816 16:06:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.816 [2024-12-12 16:06:07.949459] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:41.816 [2024-12-12 16:06:07.949543] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:41.816 [2024-12-12 16:06:07.949573] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:41.816 [2024-12-12 16:06:07.949602] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:41.816 [2024-12-12 16:06:07.950028] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:41.816 [2024-12-12 16:06:07.950089] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:41.816 [2024-12-12 16:06:07.950173] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:41.816 [2024-12-12 16:06:07.950220] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:41.816 [2024-12-12 16:06:07.950360] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:41.816 [2024-12-12 16:06:07.950398] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:41.816 [2024-12-12 16:06:07.950678] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:41.816 [2024-12-12 
16:06:07.950882] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:41.816 [2024-12-12 16:06:07.950935] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:41.816 [2024-12-12 16:06:07.951123] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:41.816 pt3 00:09:41.816 16:06:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.816 16:06:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:41.816 16:06:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:41.816 16:06:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:41.816 16:06:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:41.816 16:06:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:41.816 16:06:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:41.816 16:06:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:41.816 16:06:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:41.816 16:06:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:41.816 16:06:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:41.816 16:06:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:41.816 16:06:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:41.816 16:06:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.816 16:06:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- 
# jq -r '.[] | select(.name == "raid_bdev1")' 00:09:41.816 16:06:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.816 16:06:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.816 16:06:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.816 16:06:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:41.816 "name": "raid_bdev1", 00:09:41.816 "uuid": "c6daa3b1-b427-415a-b58e-99368c3abd4d", 00:09:41.816 "strip_size_kb": 64, 00:09:41.816 "state": "online", 00:09:41.816 "raid_level": "concat", 00:09:41.816 "superblock": true, 00:09:41.816 "num_base_bdevs": 3, 00:09:41.816 "num_base_bdevs_discovered": 3, 00:09:41.816 "num_base_bdevs_operational": 3, 00:09:41.816 "base_bdevs_list": [ 00:09:41.816 { 00:09:41.816 "name": "pt1", 00:09:41.816 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:41.816 "is_configured": true, 00:09:41.816 "data_offset": 2048, 00:09:41.817 "data_size": 63488 00:09:41.817 }, 00:09:41.817 { 00:09:41.817 "name": "pt2", 00:09:41.817 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:41.817 "is_configured": true, 00:09:41.817 "data_offset": 2048, 00:09:41.817 "data_size": 63488 00:09:41.817 }, 00:09:41.817 { 00:09:41.817 "name": "pt3", 00:09:41.817 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:41.817 "is_configured": true, 00:09:41.817 "data_offset": 2048, 00:09:41.817 "data_size": 63488 00:09:41.817 } 00:09:41.817 ] 00:09:41.817 }' 00:09:41.817 16:06:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:41.817 16:06:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.386 16:06:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:42.386 16:06:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:42.386 16:06:08 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:42.386 16:06:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:42.386 16:06:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:42.386 16:06:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:42.386 16:06:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:42.386 16:06:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.386 16:06:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:42.386 16:06:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.386 [2024-12-12 16:06:08.449079] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:42.386 16:06:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.386 16:06:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:42.386 "name": "raid_bdev1", 00:09:42.386 "aliases": [ 00:09:42.386 "c6daa3b1-b427-415a-b58e-99368c3abd4d" 00:09:42.386 ], 00:09:42.386 "product_name": "Raid Volume", 00:09:42.386 "block_size": 512, 00:09:42.386 "num_blocks": 190464, 00:09:42.386 "uuid": "c6daa3b1-b427-415a-b58e-99368c3abd4d", 00:09:42.386 "assigned_rate_limits": { 00:09:42.386 "rw_ios_per_sec": 0, 00:09:42.386 "rw_mbytes_per_sec": 0, 00:09:42.386 "r_mbytes_per_sec": 0, 00:09:42.386 "w_mbytes_per_sec": 0 00:09:42.386 }, 00:09:42.386 "claimed": false, 00:09:42.386 "zoned": false, 00:09:42.386 "supported_io_types": { 00:09:42.386 "read": true, 00:09:42.386 "write": true, 00:09:42.386 "unmap": true, 00:09:42.386 "flush": true, 00:09:42.386 "reset": true, 00:09:42.386 "nvme_admin": false, 00:09:42.386 "nvme_io": false, 00:09:42.386 "nvme_io_md": false, 00:09:42.386 
"write_zeroes": true, 00:09:42.386 "zcopy": false, 00:09:42.386 "get_zone_info": false, 00:09:42.386 "zone_management": false, 00:09:42.386 "zone_append": false, 00:09:42.386 "compare": false, 00:09:42.386 "compare_and_write": false, 00:09:42.386 "abort": false, 00:09:42.386 "seek_hole": false, 00:09:42.386 "seek_data": false, 00:09:42.386 "copy": false, 00:09:42.386 "nvme_iov_md": false 00:09:42.386 }, 00:09:42.386 "memory_domains": [ 00:09:42.386 { 00:09:42.386 "dma_device_id": "system", 00:09:42.386 "dma_device_type": 1 00:09:42.386 }, 00:09:42.386 { 00:09:42.386 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:42.386 "dma_device_type": 2 00:09:42.386 }, 00:09:42.386 { 00:09:42.386 "dma_device_id": "system", 00:09:42.386 "dma_device_type": 1 00:09:42.386 }, 00:09:42.386 { 00:09:42.386 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:42.386 "dma_device_type": 2 00:09:42.386 }, 00:09:42.386 { 00:09:42.386 "dma_device_id": "system", 00:09:42.386 "dma_device_type": 1 00:09:42.386 }, 00:09:42.386 { 00:09:42.386 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:42.386 "dma_device_type": 2 00:09:42.386 } 00:09:42.386 ], 00:09:42.386 "driver_specific": { 00:09:42.386 "raid": { 00:09:42.386 "uuid": "c6daa3b1-b427-415a-b58e-99368c3abd4d", 00:09:42.386 "strip_size_kb": 64, 00:09:42.386 "state": "online", 00:09:42.386 "raid_level": "concat", 00:09:42.386 "superblock": true, 00:09:42.386 "num_base_bdevs": 3, 00:09:42.386 "num_base_bdevs_discovered": 3, 00:09:42.386 "num_base_bdevs_operational": 3, 00:09:42.386 "base_bdevs_list": [ 00:09:42.386 { 00:09:42.386 "name": "pt1", 00:09:42.386 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:42.386 "is_configured": true, 00:09:42.386 "data_offset": 2048, 00:09:42.386 "data_size": 63488 00:09:42.386 }, 00:09:42.386 { 00:09:42.386 "name": "pt2", 00:09:42.386 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:42.386 "is_configured": true, 00:09:42.386 "data_offset": 2048, 00:09:42.386 "data_size": 63488 00:09:42.386 }, 00:09:42.386 
{ 00:09:42.386 "name": "pt3", 00:09:42.386 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:42.386 "is_configured": true, 00:09:42.386 "data_offset": 2048, 00:09:42.386 "data_size": 63488 00:09:42.386 } 00:09:42.386 ] 00:09:42.386 } 00:09:42.386 } 00:09:42.386 }' 00:09:42.386 16:06:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:42.386 16:06:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:42.386 pt2 00:09:42.386 pt3' 00:09:42.386 16:06:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:42.386 16:06:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:42.386 16:06:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:42.386 16:06:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:42.386 16:06:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.386 16:06:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.386 16:06:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:42.386 16:06:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.386 16:06:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:42.386 16:06:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:42.386 16:06:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:42.386 16:06:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:42.386 16:06:08 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.386 16:06:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.386 16:06:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:42.386 16:06:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.386 16:06:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:42.386 16:06:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:42.386 16:06:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:42.386 16:06:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:42.386 16:06:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:42.386 16:06:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.387 16:06:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.387 16:06:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.387 16:06:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:42.387 16:06:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:42.387 16:06:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:42.387 16:06:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:42.387 16:06:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.387 16:06:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.387 
[2024-12-12 16:06:08.728533] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:42.646 16:06:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.646 16:06:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' c6daa3b1-b427-415a-b58e-99368c3abd4d '!=' c6daa3b1-b427-415a-b58e-99368c3abd4d ']' 00:09:42.646 16:06:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:09:42.646 16:06:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:42.646 16:06:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:42.646 16:06:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 68875 00:09:42.646 16:06:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 68875 ']' 00:09:42.646 16:06:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 68875 00:09:42.646 16:06:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:09:42.646 16:06:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:42.646 16:06:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68875 00:09:42.646 16:06:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:42.646 16:06:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:42.646 16:06:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68875' 00:09:42.646 killing process with pid 68875 00:09:42.646 16:06:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 68875 00:09:42.646 [2024-12-12 16:06:08.801904] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:42.646 [2024-12-12 16:06:08.802097] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:42.646 16:06:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 68875 00:09:42.646 [2024-12-12 16:06:08.802210] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:42.646 [2024-12-12 16:06:08.802258] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:42.906 [2024-12-12 16:06:09.137473] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:44.286 16:06:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:44.286 00:09:44.286 real 0m5.383s 00:09:44.286 user 0m7.563s 00:09:44.286 sys 0m0.949s 00:09:44.286 ************************************ 00:09:44.286 END TEST raid_superblock_test 00:09:44.286 ************************************ 00:09:44.286 16:06:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:44.286 16:06:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.286 16:06:10 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:09:44.286 16:06:10 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:44.286 16:06:10 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:44.286 16:06:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:44.286 ************************************ 00:09:44.286 START TEST raid_read_error_test 00:09:44.286 ************************************ 00:09:44.286 16:06:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 read 00:09:44.286 16:06:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:44.286 16:06:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:44.286 16:06:10 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:44.286 16:06:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:44.286 16:06:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:44.286 16:06:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:44.286 16:06:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:44.287 16:06:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:44.287 16:06:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:44.287 16:06:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:44.287 16:06:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:44.287 16:06:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:44.287 16:06:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:44.287 16:06:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:44.287 16:06:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:44.287 16:06:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:44.287 16:06:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:44.287 16:06:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:44.287 16:06:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:44.287 16:06:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:44.287 16:06:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:44.287 16:06:10 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:44.287 16:06:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:44.287 16:06:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:44.287 16:06:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:44.287 16:06:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.mtU3vTCiF3 00:09:44.287 16:06:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69134 00:09:44.287 16:06:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:44.287 16:06:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69134 00:09:44.287 16:06:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 69134 ']' 00:09:44.287 16:06:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:44.287 16:06:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:44.287 16:06:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:44.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:44.287 16:06:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:44.287 16:06:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.287 [2024-12-12 16:06:10.555178] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:09:44.287 [2024-12-12 16:06:10.555422] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69134 ] 00:09:44.546 [2024-12-12 16:06:10.713764] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:44.546 [2024-12-12 16:06:10.855602] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:44.806 [2024-12-12 16:06:11.105489] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:44.806 [2024-12-12 16:06:11.105542] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:45.066 16:06:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:45.066 16:06:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:45.066 16:06:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:45.066 16:06:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:45.066 16:06:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.067 16:06:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.326 BaseBdev1_malloc 00:09:45.326 16:06:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.326 16:06:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:45.326 16:06:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.326 16:06:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.326 true 00:09:45.326 16:06:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:45.326 16:06:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:45.326 16:06:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.326 16:06:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.326 [2024-12-12 16:06:11.449075] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:45.326 [2024-12-12 16:06:11.449207] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:45.326 [2024-12-12 16:06:11.449233] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:45.326 [2024-12-12 16:06:11.449246] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:45.326 [2024-12-12 16:06:11.451805] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:45.326 [2024-12-12 16:06:11.451847] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:45.326 BaseBdev1 00:09:45.326 16:06:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.326 16:06:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:45.326 16:06:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:45.326 16:06:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.326 16:06:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.326 BaseBdev2_malloc 00:09:45.326 16:06:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.326 16:06:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:45.326 16:06:11 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.326 16:06:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.326 true 00:09:45.326 16:06:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.326 16:06:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:45.326 16:06:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.326 16:06:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.326 [2024-12-12 16:06:11.525442] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:45.326 [2024-12-12 16:06:11.525516] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:45.326 [2024-12-12 16:06:11.525536] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:45.326 [2024-12-12 16:06:11.525549] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:45.326 [2024-12-12 16:06:11.528152] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:45.326 [2024-12-12 16:06:11.528276] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:45.326 BaseBdev2 00:09:45.326 16:06:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.326 16:06:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:45.326 16:06:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:45.326 16:06:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.326 16:06:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.326 BaseBdev3_malloc 00:09:45.326 16:06:11 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.327 16:06:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:45.327 16:06:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.327 16:06:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.327 true 00:09:45.327 16:06:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.327 16:06:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:45.327 16:06:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.327 16:06:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.327 [2024-12-12 16:06:11.607069] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:45.327 [2024-12-12 16:06:11.607147] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:45.327 [2024-12-12 16:06:11.607164] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:45.327 [2024-12-12 16:06:11.607176] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:45.327 [2024-12-12 16:06:11.609614] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:45.327 [2024-12-12 16:06:11.609733] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:45.327 BaseBdev3 00:09:45.327 16:06:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.327 16:06:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:45.327 16:06:11 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.327 16:06:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.327 [2024-12-12 16:06:11.615141] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:45.327 [2024-12-12 16:06:11.617240] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:45.327 [2024-12-12 16:06:11.617351] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:45.327 [2024-12-12 16:06:11.617627] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:45.327 [2024-12-12 16:06:11.617675] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:45.327 [2024-12-12 16:06:11.617950] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:45.327 [2024-12-12 16:06:11.618144] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:45.327 [2024-12-12 16:06:11.618189] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:45.327 [2024-12-12 16:06:11.618364] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:45.327 16:06:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.327 16:06:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:45.327 16:06:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:45.327 16:06:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:45.327 16:06:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:45.327 16:06:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:45.327 16:06:11 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:45.327 16:06:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:45.327 16:06:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:45.327 16:06:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:45.327 16:06:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:45.327 16:06:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.327 16:06:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:45.327 16:06:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.327 16:06:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.327 16:06:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.327 16:06:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:45.327 "name": "raid_bdev1", 00:09:45.327 "uuid": "b71a375a-c5e2-433f-9dab-0c5eba203158", 00:09:45.327 "strip_size_kb": 64, 00:09:45.327 "state": "online", 00:09:45.327 "raid_level": "concat", 00:09:45.327 "superblock": true, 00:09:45.327 "num_base_bdevs": 3, 00:09:45.327 "num_base_bdevs_discovered": 3, 00:09:45.327 "num_base_bdevs_operational": 3, 00:09:45.327 "base_bdevs_list": [ 00:09:45.327 { 00:09:45.327 "name": "BaseBdev1", 00:09:45.327 "uuid": "86cd618b-3c40-5123-8743-639d1f30026b", 00:09:45.327 "is_configured": true, 00:09:45.327 "data_offset": 2048, 00:09:45.327 "data_size": 63488 00:09:45.327 }, 00:09:45.327 { 00:09:45.327 "name": "BaseBdev2", 00:09:45.327 "uuid": "f8b8c280-c220-5aa5-82bd-82dcff6d7171", 00:09:45.327 "is_configured": true, 00:09:45.327 "data_offset": 2048, 00:09:45.327 "data_size": 63488 
00:09:45.327 }, 00:09:45.327 { 00:09:45.327 "name": "BaseBdev3", 00:09:45.327 "uuid": "3354a72b-3424-5d37-a759-b0836cf6be89", 00:09:45.327 "is_configured": true, 00:09:45.327 "data_offset": 2048, 00:09:45.327 "data_size": 63488 00:09:45.327 } 00:09:45.327 ] 00:09:45.327 }' 00:09:45.327 16:06:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:45.327 16:06:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.897 16:06:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:45.897 16:06:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:45.897 [2024-12-12 16:06:12.124075] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:46.871 16:06:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:46.871 16:06:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.871 16:06:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.871 16:06:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.871 16:06:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:46.871 16:06:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:46.871 16:06:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:46.871 16:06:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:46.871 16:06:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:46.871 16:06:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:09:46.871 16:06:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:46.871 16:06:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:46.871 16:06:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:46.871 16:06:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:46.871 16:06:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:46.871 16:06:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:46.871 16:06:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:46.871 16:06:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.871 16:06:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:46.871 16:06:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.871 16:06:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.871 16:06:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.871 16:06:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:46.871 "name": "raid_bdev1", 00:09:46.871 "uuid": "b71a375a-c5e2-433f-9dab-0c5eba203158", 00:09:46.871 "strip_size_kb": 64, 00:09:46.871 "state": "online", 00:09:46.871 "raid_level": "concat", 00:09:46.871 "superblock": true, 00:09:46.871 "num_base_bdevs": 3, 00:09:46.871 "num_base_bdevs_discovered": 3, 00:09:46.871 "num_base_bdevs_operational": 3, 00:09:46.871 "base_bdevs_list": [ 00:09:46.871 { 00:09:46.871 "name": "BaseBdev1", 00:09:46.871 "uuid": "86cd618b-3c40-5123-8743-639d1f30026b", 00:09:46.871 "is_configured": true, 00:09:46.871 "data_offset": 2048, 00:09:46.871 "data_size": 63488 
00:09:46.871 }, 00:09:46.871 { 00:09:46.871 "name": "BaseBdev2", 00:09:46.871 "uuid": "f8b8c280-c220-5aa5-82bd-82dcff6d7171", 00:09:46.871 "is_configured": true, 00:09:46.871 "data_offset": 2048, 00:09:46.871 "data_size": 63488 00:09:46.871 }, 00:09:46.871 { 00:09:46.871 "name": "BaseBdev3", 00:09:46.871 "uuid": "3354a72b-3424-5d37-a759-b0836cf6be89", 00:09:46.871 "is_configured": true, 00:09:46.871 "data_offset": 2048, 00:09:46.871 "data_size": 63488 00:09:46.871 } 00:09:46.871 ] 00:09:46.871 }' 00:09:46.871 16:06:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:46.871 16:06:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.440 16:06:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:47.441 16:06:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.441 16:06:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.441 [2024-12-12 16:06:13.517472] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:47.441 [2024-12-12 16:06:13.517625] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:47.441 [2024-12-12 16:06:13.520552] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:47.441 [2024-12-12 16:06:13.520647] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:47.441 [2024-12-12 16:06:13.520715] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:47.441 [2024-12-12 16:06:13.520757] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:47.441 { 00:09:47.441 "results": [ 00:09:47.441 { 00:09:47.441 "job": "raid_bdev1", 00:09:47.441 "core_mask": "0x1", 00:09:47.441 "workload": "randrw", 00:09:47.441 "percentage": 50, 
00:09:47.441 "status": "finished", 00:09:47.441 "queue_depth": 1, 00:09:47.441 "io_size": 131072, 00:09:47.441 "runtime": 1.393892, 00:09:47.441 "iops": 13279.364541872685, 00:09:47.441 "mibps": 1659.9205677340856, 00:09:47.441 "io_failed": 1, 00:09:47.441 "io_timeout": 0, 00:09:47.441 "avg_latency_us": 105.6378419629636, 00:09:47.441 "min_latency_us": 27.612227074235808, 00:09:47.441 "max_latency_us": 1409.4532751091704 00:09:47.441 } 00:09:47.441 ], 00:09:47.441 "core_count": 1 00:09:47.441 } 00:09:47.441 16:06:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.441 16:06:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69134 00:09:47.441 16:06:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 69134 ']' 00:09:47.441 16:06:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 69134 00:09:47.441 16:06:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:09:47.441 16:06:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:47.441 16:06:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69134 00:09:47.441 16:06:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:47.441 16:06:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:47.441 16:06:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69134' 00:09:47.441 killing process with pid 69134 00:09:47.441 16:06:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 69134 00:09:47.441 [2024-12-12 16:06:13.564968] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:47.441 16:06:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 69134 00:09:47.700 [2024-12-12 
16:06:13.827395] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:49.079 16:06:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.mtU3vTCiF3 00:09:49.079 16:06:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:49.079 16:06:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:49.079 16:06:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:09:49.079 16:06:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:49.079 16:06:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:49.079 16:06:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:49.079 16:06:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:09:49.079 00:09:49.079 real 0m4.712s 00:09:49.079 user 0m5.455s 00:09:49.079 sys 0m0.666s 00:09:49.079 ************************************ 00:09:49.079 END TEST raid_read_error_test 00:09:49.079 ************************************ 00:09:49.079 16:06:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:49.079 16:06:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.079 16:06:15 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:09:49.079 16:06:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:49.079 16:06:15 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:49.079 16:06:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:49.079 ************************************ 00:09:49.079 START TEST raid_write_error_test 00:09:49.079 ************************************ 00:09:49.079 16:06:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 write 00:09:49.079 16:06:15 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:49.079 16:06:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:49.079 16:06:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:49.079 16:06:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:49.079 16:06:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:49.079 16:06:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:49.079 16:06:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:49.079 16:06:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:49.079 16:06:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:49.079 16:06:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:49.079 16:06:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:49.079 16:06:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:49.079 16:06:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:49.079 16:06:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:49.079 16:06:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:49.079 16:06:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:49.079 16:06:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:49.079 16:06:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:49.079 16:06:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:49.079 16:06:15 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:49.079 16:06:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:49.079 16:06:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:49.079 16:06:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:49.079 16:06:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:49.079 16:06:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:49.079 16:06:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.zXaubzfA4W 00:09:49.079 16:06:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69274 00:09:49.079 16:06:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69274 00:09:49.079 16:06:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:49.079 16:06:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 69274 ']' 00:09:49.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:49.079 16:06:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:49.079 16:06:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:49.079 16:06:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:49.079 16:06:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:49.079 16:06:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.079 [2024-12-12 16:06:15.340331] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:09:49.079 [2024-12-12 16:06:15.340440] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69274 ] 00:09:49.338 [2024-12-12 16:06:15.512980] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:49.338 [2024-12-12 16:06:15.648787] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:49.597 [2024-12-12 16:06:15.886868] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:49.597 [2024-12-12 16:06:15.886946] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:49.856 16:06:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:49.856 16:06:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:49.856 16:06:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:49.856 16:06:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:49.856 16:06:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.856 16:06:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.117 BaseBdev1_malloc 00:09:50.117 16:06:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.117 16:06:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:09:50.117 16:06:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.117 16:06:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.117 true 00:09:50.117 16:06:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.117 16:06:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:50.117 16:06:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.117 16:06:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.118 [2024-12-12 16:06:16.235297] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:50.118 [2024-12-12 16:06:16.235367] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:50.118 [2024-12-12 16:06:16.235390] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:50.118 [2024-12-12 16:06:16.235402] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:50.118 [2024-12-12 16:06:16.237854] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:50.118 [2024-12-12 16:06:16.237909] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:50.118 BaseBdev1 00:09:50.118 16:06:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.118 16:06:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:50.118 16:06:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:50.118 16:06:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.118 16:06:16 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:50.118 BaseBdev2_malloc 00:09:50.118 16:06:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.118 16:06:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:50.118 16:06:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.118 16:06:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.118 true 00:09:50.118 16:06:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.118 16:06:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:50.118 16:06:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.118 16:06:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.118 [2024-12-12 16:06:16.311432] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:50.118 [2024-12-12 16:06:16.311508] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:50.118 [2024-12-12 16:06:16.311527] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:50.118 [2024-12-12 16:06:16.311541] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:50.118 [2024-12-12 16:06:16.314082] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:50.118 [2024-12-12 16:06:16.314122] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:50.118 BaseBdev2 00:09:50.118 16:06:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.118 16:06:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:50.118 16:06:16 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:50.118 16:06:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.118 16:06:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.118 BaseBdev3_malloc 00:09:50.118 16:06:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.118 16:06:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:50.118 16:06:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.118 16:06:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.118 true 00:09:50.118 16:06:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.118 16:06:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:50.118 16:06:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.118 16:06:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.118 [2024-12-12 16:06:16.401457] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:50.118 [2024-12-12 16:06:16.401529] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:50.118 [2024-12-12 16:06:16.401545] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:50.118 [2024-12-12 16:06:16.401556] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:50.118 [2024-12-12 16:06:16.403940] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:50.118 [2024-12-12 16:06:16.403978] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:09:50.118 BaseBdev3 00:09:50.118 16:06:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.118 16:06:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:50.118 16:06:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.118 16:06:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.118 [2024-12-12 16:06:16.413552] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:50.118 [2024-12-12 16:06:16.415568] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:50.118 [2024-12-12 16:06:16.415749] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:50.118 [2024-12-12 16:06:16.415978] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:50.118 [2024-12-12 16:06:16.415993] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:50.118 [2024-12-12 16:06:16.416245] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:50.118 [2024-12-12 16:06:16.416428] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:50.118 [2024-12-12 16:06:16.416443] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:50.118 [2024-12-12 16:06:16.416581] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:50.118 16:06:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.118 16:06:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:50.118 16:06:16 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:50.118 16:06:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:50.118 16:06:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:50.118 16:06:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:50.118 16:06:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:50.118 16:06:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:50.118 16:06:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:50.118 16:06:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:50.118 16:06:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:50.118 16:06:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.118 16:06:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.118 16:06:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:50.118 16:06:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.118 16:06:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.378 16:06:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:50.378 "name": "raid_bdev1", 00:09:50.378 "uuid": "1e2db995-c0d7-43c5-a084-72b36ad06c01", 00:09:50.378 "strip_size_kb": 64, 00:09:50.378 "state": "online", 00:09:50.378 "raid_level": "concat", 00:09:50.378 "superblock": true, 00:09:50.378 "num_base_bdevs": 3, 00:09:50.378 "num_base_bdevs_discovered": 3, 00:09:50.378 "num_base_bdevs_operational": 3, 00:09:50.378 "base_bdevs_list": [ 00:09:50.378 { 00:09:50.378 
"name": "BaseBdev1", 00:09:50.378 "uuid": "9d8d449e-dffd-5608-ad9c-d5c8175e2829", 00:09:50.378 "is_configured": true, 00:09:50.378 "data_offset": 2048, 00:09:50.378 "data_size": 63488 00:09:50.378 }, 00:09:50.378 { 00:09:50.378 "name": "BaseBdev2", 00:09:50.378 "uuid": "9726903c-1633-566a-a7d2-127ccbebe0b3", 00:09:50.378 "is_configured": true, 00:09:50.378 "data_offset": 2048, 00:09:50.378 "data_size": 63488 00:09:50.378 }, 00:09:50.378 { 00:09:50.378 "name": "BaseBdev3", 00:09:50.378 "uuid": "bbc1b11a-79d7-5428-ad1c-c98971503f5d", 00:09:50.378 "is_configured": true, 00:09:50.378 "data_offset": 2048, 00:09:50.378 "data_size": 63488 00:09:50.378 } 00:09:50.378 ] 00:09:50.378 }' 00:09:50.378 16:06:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:50.378 16:06:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.637 16:06:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:50.637 16:06:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:50.637 [2024-12-12 16:06:16.961973] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:51.576 16:06:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:51.576 16:06:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.576 16:06:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.576 16:06:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.576 16:06:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:51.576 16:06:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:51.576 16:06:17 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:51.576 16:06:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:51.576 16:06:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:51.576 16:06:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:51.576 16:06:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:51.576 16:06:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:51.576 16:06:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:51.576 16:06:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:51.576 16:06:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:51.576 16:06:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:51.576 16:06:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:51.576 16:06:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.576 16:06:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:51.576 16:06:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.576 16:06:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.576 16:06:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.836 16:06:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:51.836 "name": "raid_bdev1", 00:09:51.836 "uuid": "1e2db995-c0d7-43c5-a084-72b36ad06c01", 00:09:51.836 "strip_size_kb": 64, 00:09:51.836 "state": "online", 
00:09:51.836 "raid_level": "concat", 00:09:51.836 "superblock": true, 00:09:51.836 "num_base_bdevs": 3, 00:09:51.836 "num_base_bdevs_discovered": 3, 00:09:51.836 "num_base_bdevs_operational": 3, 00:09:51.836 "base_bdevs_list": [ 00:09:51.836 { 00:09:51.836 "name": "BaseBdev1", 00:09:51.836 "uuid": "9d8d449e-dffd-5608-ad9c-d5c8175e2829", 00:09:51.836 "is_configured": true, 00:09:51.836 "data_offset": 2048, 00:09:51.836 "data_size": 63488 00:09:51.836 }, 00:09:51.836 { 00:09:51.836 "name": "BaseBdev2", 00:09:51.836 "uuid": "9726903c-1633-566a-a7d2-127ccbebe0b3", 00:09:51.836 "is_configured": true, 00:09:51.836 "data_offset": 2048, 00:09:51.836 "data_size": 63488 00:09:51.836 }, 00:09:51.836 { 00:09:51.836 "name": "BaseBdev3", 00:09:51.836 "uuid": "bbc1b11a-79d7-5428-ad1c-c98971503f5d", 00:09:51.836 "is_configured": true, 00:09:51.836 "data_offset": 2048, 00:09:51.836 "data_size": 63488 00:09:51.836 } 00:09:51.836 ] 00:09:51.836 }' 00:09:51.836 16:06:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:51.836 16:06:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.095 16:06:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:52.095 16:06:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.095 16:06:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.095 [2024-12-12 16:06:18.335105] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:52.095 [2024-12-12 16:06:18.335246] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:52.095 [2024-12-12 16:06:18.337988] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:52.096 [2024-12-12 16:06:18.338081] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:52.096 [2024-12-12 16:06:18.338142] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:52.096 [2024-12-12 16:06:18.338183] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:52.096 { 00:09:52.096 "results": [ 00:09:52.096 { 00:09:52.096 "job": "raid_bdev1", 00:09:52.096 "core_mask": "0x1", 00:09:52.096 "workload": "randrw", 00:09:52.096 "percentage": 50, 00:09:52.096 "status": "finished", 00:09:52.096 "queue_depth": 1, 00:09:52.096 "io_size": 131072, 00:09:52.096 "runtime": 1.373758, 00:09:52.096 "iops": 13553.333265393177, 00:09:52.096 "mibps": 1694.1666581741472, 00:09:52.096 "io_failed": 1, 00:09:52.096 "io_timeout": 0, 00:09:52.096 "avg_latency_us": 103.58577366685584, 00:09:52.096 "min_latency_us": 27.165065502183406, 00:09:52.096 "max_latency_us": 1366.5257641921398 00:09:52.096 } 00:09:52.096 ], 00:09:52.096 "core_count": 1 00:09:52.096 } 00:09:52.096 16:06:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.096 16:06:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69274 00:09:52.096 16:06:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 69274 ']' 00:09:52.096 16:06:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 69274 00:09:52.096 16:06:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:09:52.096 16:06:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:52.096 16:06:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69274 00:09:52.096 killing process with pid 69274 00:09:52.096 16:06:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:52.096 16:06:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:52.096 
16:06:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69274' 00:09:52.096 16:06:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 69274 00:09:52.096 [2024-12-12 16:06:18.382491] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:52.096 16:06:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 69274 00:09:52.355 [2024-12-12 16:06:18.640003] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:53.735 16:06:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.zXaubzfA4W 00:09:53.735 16:06:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:53.735 16:06:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:53.735 16:06:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:09:53.735 16:06:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:53.735 ************************************ 00:09:53.735 END TEST raid_write_error_test 00:09:53.735 ************************************ 00:09:53.735 16:06:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:53.735 16:06:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:53.735 16:06:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:09:53.735 00:09:53.735 real 0m4.741s 00:09:53.735 user 0m5.517s 00:09:53.735 sys 0m0.626s 00:09:53.735 16:06:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:53.735 16:06:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.735 16:06:20 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:53.735 16:06:20 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test raid1 3 false 00:09:53.735 16:06:20 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:53.735 16:06:20 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:53.735 16:06:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:53.735 ************************************ 00:09:53.735 START TEST raid_state_function_test 00:09:53.735 ************************************ 00:09:53.735 16:06:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 false 00:09:53.735 16:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:53.735 16:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:53.735 16:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:53.735 16:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:53.735 16:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:53.735 16:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:53.735 16:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:53.735 16:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:53.735 16:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:53.735 16:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:53.735 16:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:53.735 16:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:53.735 16:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:53.735 16:06:20 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:53.735 16:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:53.735 16:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:53.735 16:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:53.735 16:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:53.735 16:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:53.735 16:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:53.735 16:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:53.735 16:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:53.735 16:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:09:53.735 16:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:53.735 16:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:53.735 16:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=69418 00:09:53.735 16:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:53.735 16:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69418' 00:09:53.735 Process raid pid: 69418 00:09:53.735 16:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 69418 00:09:53.735 16:06:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 69418 ']' 00:09:53.735 16:06:20 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:53.735 16:06:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:53.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:53.735 16:06:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:53.735 16:06:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:53.735 16:06:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.995 [2024-12-12 16:06:20.145556] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:09:53.995 [2024-12-12 16:06:20.145683] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:53.995 [2024-12-12 16:06:20.326236] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:54.254 [2024-12-12 16:06:20.470505] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:54.513 [2024-12-12 16:06:20.718411] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:54.513 [2024-12-12 16:06:20.718470] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:54.773 16:06:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:54.773 16:06:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:09:54.773 16:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:54.773 16:06:20 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.773 16:06:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.773 [2024-12-12 16:06:20.975121] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:54.773 [2024-12-12 16:06:20.975188] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:54.773 [2024-12-12 16:06:20.975199] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:54.774 [2024-12-12 16:06:20.975209] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:54.774 [2024-12-12 16:06:20.975215] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:54.774 [2024-12-12 16:06:20.975224] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:54.774 16:06:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.774 16:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:54.774 16:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:54.774 16:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:54.774 16:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:54.774 16:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:54.774 16:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:54.774 16:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:54.774 16:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:54.774 
16:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:54.774 16:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:54.774 16:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.774 16:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:54.774 16:06:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.774 16:06:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.774 16:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.774 16:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:54.774 "name": "Existed_Raid", 00:09:54.774 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.774 "strip_size_kb": 0, 00:09:54.774 "state": "configuring", 00:09:54.774 "raid_level": "raid1", 00:09:54.774 "superblock": false, 00:09:54.774 "num_base_bdevs": 3, 00:09:54.774 "num_base_bdevs_discovered": 0, 00:09:54.774 "num_base_bdevs_operational": 3, 00:09:54.774 "base_bdevs_list": [ 00:09:54.774 { 00:09:54.774 "name": "BaseBdev1", 00:09:54.774 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.774 "is_configured": false, 00:09:54.774 "data_offset": 0, 00:09:54.774 "data_size": 0 00:09:54.774 }, 00:09:54.774 { 00:09:54.774 "name": "BaseBdev2", 00:09:54.774 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.774 "is_configured": false, 00:09:54.774 "data_offset": 0, 00:09:54.774 "data_size": 0 00:09:54.774 }, 00:09:54.774 { 00:09:54.774 "name": "BaseBdev3", 00:09:54.774 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.774 "is_configured": false, 00:09:54.774 "data_offset": 0, 00:09:54.774 "data_size": 0 00:09:54.774 } 00:09:54.774 ] 00:09:54.774 }' 00:09:54.774 16:06:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:54.774 16:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.033 16:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:55.033 16:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.033 16:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.033 [2024-12-12 16:06:21.370491] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:55.033 [2024-12-12 16:06:21.370633] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:55.033 16:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.033 16:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:55.033 16:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.033 16:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.033 [2024-12-12 16:06:21.382414] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:55.033 [2024-12-12 16:06:21.382468] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:55.033 [2024-12-12 16:06:21.382478] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:55.033 [2024-12-12 16:06:21.382488] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:55.033 [2024-12-12 16:06:21.382495] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:55.033 [2024-12-12 16:06:21.382506] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:55.293 16:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.293 16:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:55.293 16:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.293 16:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.294 [2024-12-12 16:06:21.439076] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:55.294 BaseBdev1 00:09:55.294 16:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.294 16:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:55.294 16:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:55.294 16:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:55.294 16:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:55.294 16:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:55.294 16:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:55.294 16:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:55.294 16:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.294 16:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.294 16:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.294 16:06:21 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:55.294 16:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.294 16:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.294 [ 00:09:55.294 { 00:09:55.294 "name": "BaseBdev1", 00:09:55.294 "aliases": [ 00:09:55.294 "dff9d3f4-a441-416e-9c3b-ee83b674adb6" 00:09:55.294 ], 00:09:55.294 "product_name": "Malloc disk", 00:09:55.294 "block_size": 512, 00:09:55.294 "num_blocks": 65536, 00:09:55.294 "uuid": "dff9d3f4-a441-416e-9c3b-ee83b674adb6", 00:09:55.294 "assigned_rate_limits": { 00:09:55.294 "rw_ios_per_sec": 0, 00:09:55.294 "rw_mbytes_per_sec": 0, 00:09:55.294 "r_mbytes_per_sec": 0, 00:09:55.294 "w_mbytes_per_sec": 0 00:09:55.294 }, 00:09:55.294 "claimed": true, 00:09:55.294 "claim_type": "exclusive_write", 00:09:55.294 "zoned": false, 00:09:55.294 "supported_io_types": { 00:09:55.294 "read": true, 00:09:55.294 "write": true, 00:09:55.294 "unmap": true, 00:09:55.294 "flush": true, 00:09:55.294 "reset": true, 00:09:55.294 "nvme_admin": false, 00:09:55.294 "nvme_io": false, 00:09:55.294 "nvme_io_md": false, 00:09:55.294 "write_zeroes": true, 00:09:55.294 "zcopy": true, 00:09:55.294 "get_zone_info": false, 00:09:55.294 "zone_management": false, 00:09:55.294 "zone_append": false, 00:09:55.294 "compare": false, 00:09:55.294 "compare_and_write": false, 00:09:55.294 "abort": true, 00:09:55.294 "seek_hole": false, 00:09:55.294 "seek_data": false, 00:09:55.294 "copy": true, 00:09:55.294 "nvme_iov_md": false 00:09:55.294 }, 00:09:55.294 "memory_domains": [ 00:09:55.294 { 00:09:55.294 "dma_device_id": "system", 00:09:55.294 "dma_device_type": 1 00:09:55.294 }, 00:09:55.294 { 00:09:55.294 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:55.294 "dma_device_type": 2 00:09:55.294 } 00:09:55.294 ], 00:09:55.294 "driver_specific": {} 00:09:55.294 } 00:09:55.294 ] 00:09:55.294 16:06:21 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.294 16:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:55.294 16:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:55.294 16:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:55.294 16:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:55.294 16:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:55.294 16:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:55.294 16:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:55.294 16:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:55.294 16:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:55.294 16:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:55.294 16:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:55.294 16:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.294 16:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.294 16:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.294 16:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:55.294 16:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.294 16:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:09:55.294 "name": "Existed_Raid", 00:09:55.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:55.294 "strip_size_kb": 0, 00:09:55.294 "state": "configuring", 00:09:55.294 "raid_level": "raid1", 00:09:55.294 "superblock": false, 00:09:55.294 "num_base_bdevs": 3, 00:09:55.294 "num_base_bdevs_discovered": 1, 00:09:55.294 "num_base_bdevs_operational": 3, 00:09:55.294 "base_bdevs_list": [ 00:09:55.294 { 00:09:55.294 "name": "BaseBdev1", 00:09:55.294 "uuid": "dff9d3f4-a441-416e-9c3b-ee83b674adb6", 00:09:55.294 "is_configured": true, 00:09:55.294 "data_offset": 0, 00:09:55.294 "data_size": 65536 00:09:55.294 }, 00:09:55.294 { 00:09:55.294 "name": "BaseBdev2", 00:09:55.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:55.294 "is_configured": false, 00:09:55.294 "data_offset": 0, 00:09:55.294 "data_size": 0 00:09:55.294 }, 00:09:55.294 { 00:09:55.294 "name": "BaseBdev3", 00:09:55.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:55.294 "is_configured": false, 00:09:55.294 "data_offset": 0, 00:09:55.294 "data_size": 0 00:09:55.294 } 00:09:55.294 ] 00:09:55.294 }' 00:09:55.294 16:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:55.294 16:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.864 16:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:55.864 16:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.864 16:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.864 [2024-12-12 16:06:21.914340] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:55.864 [2024-12-12 16:06:21.914426] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:55.864 16:06:21 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.864 16:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:55.864 16:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.864 16:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.864 [2024-12-12 16:06:21.922335] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:55.864 [2024-12-12 16:06:21.924554] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:55.864 [2024-12-12 16:06:21.924682] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:55.864 [2024-12-12 16:06:21.924697] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:55.864 [2024-12-12 16:06:21.924707] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:55.864 16:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.864 16:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:55.864 16:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:55.864 16:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:55.864 16:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:55.864 16:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:55.864 16:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:55.864 16:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:09:55.864 16:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:55.864 16:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:55.864 16:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:55.864 16:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:55.864 16:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:55.864 16:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:55.864 16:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.864 16:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.864 16:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.864 16:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.864 16:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:55.864 "name": "Existed_Raid", 00:09:55.864 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:55.864 "strip_size_kb": 0, 00:09:55.864 "state": "configuring", 00:09:55.864 "raid_level": "raid1", 00:09:55.864 "superblock": false, 00:09:55.864 "num_base_bdevs": 3, 00:09:55.864 "num_base_bdevs_discovered": 1, 00:09:55.864 "num_base_bdevs_operational": 3, 00:09:55.864 "base_bdevs_list": [ 00:09:55.864 { 00:09:55.864 "name": "BaseBdev1", 00:09:55.864 "uuid": "dff9d3f4-a441-416e-9c3b-ee83b674adb6", 00:09:55.864 "is_configured": true, 00:09:55.864 "data_offset": 0, 00:09:55.864 "data_size": 65536 00:09:55.864 }, 00:09:55.864 { 00:09:55.864 "name": "BaseBdev2", 00:09:55.864 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:55.864 
"is_configured": false, 00:09:55.864 "data_offset": 0, 00:09:55.864 "data_size": 0 00:09:55.864 }, 00:09:55.864 { 00:09:55.864 "name": "BaseBdev3", 00:09:55.864 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:55.864 "is_configured": false, 00:09:55.864 "data_offset": 0, 00:09:55.864 "data_size": 0 00:09:55.864 } 00:09:55.864 ] 00:09:55.864 }' 00:09:55.864 16:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:55.864 16:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.124 16:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:56.124 16:06:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.124 16:06:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.124 [2024-12-12 16:06:22.393886] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:56.124 BaseBdev2 00:09:56.124 16:06:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.124 16:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:56.124 16:06:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:56.124 16:06:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:56.124 16:06:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:56.124 16:06:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:56.124 16:06:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:56.124 16:06:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:56.124 16:06:22 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.124 16:06:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.124 16:06:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.124 16:06:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:56.124 16:06:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.124 16:06:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.124 [ 00:09:56.124 { 00:09:56.124 "name": "BaseBdev2", 00:09:56.124 "aliases": [ 00:09:56.125 "221f727c-2be9-420e-a1e3-348c91e4312a" 00:09:56.125 ], 00:09:56.125 "product_name": "Malloc disk", 00:09:56.125 "block_size": 512, 00:09:56.125 "num_blocks": 65536, 00:09:56.125 "uuid": "221f727c-2be9-420e-a1e3-348c91e4312a", 00:09:56.125 "assigned_rate_limits": { 00:09:56.125 "rw_ios_per_sec": 0, 00:09:56.125 "rw_mbytes_per_sec": 0, 00:09:56.125 "r_mbytes_per_sec": 0, 00:09:56.125 "w_mbytes_per_sec": 0 00:09:56.125 }, 00:09:56.125 "claimed": true, 00:09:56.125 "claim_type": "exclusive_write", 00:09:56.125 "zoned": false, 00:09:56.125 "supported_io_types": { 00:09:56.125 "read": true, 00:09:56.125 "write": true, 00:09:56.125 "unmap": true, 00:09:56.125 "flush": true, 00:09:56.125 "reset": true, 00:09:56.125 "nvme_admin": false, 00:09:56.125 "nvme_io": false, 00:09:56.125 "nvme_io_md": false, 00:09:56.125 "write_zeroes": true, 00:09:56.125 "zcopy": true, 00:09:56.125 "get_zone_info": false, 00:09:56.125 "zone_management": false, 00:09:56.125 "zone_append": false, 00:09:56.125 "compare": false, 00:09:56.125 "compare_and_write": false, 00:09:56.125 "abort": true, 00:09:56.125 "seek_hole": false, 00:09:56.125 "seek_data": false, 00:09:56.125 "copy": true, 00:09:56.125 "nvme_iov_md": false 00:09:56.125 }, 00:09:56.125 
"memory_domains": [ 00:09:56.125 { 00:09:56.125 "dma_device_id": "system", 00:09:56.125 "dma_device_type": 1 00:09:56.125 }, 00:09:56.125 { 00:09:56.125 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:56.125 "dma_device_type": 2 00:09:56.125 } 00:09:56.125 ], 00:09:56.125 "driver_specific": {} 00:09:56.125 } 00:09:56.125 ] 00:09:56.125 16:06:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.125 16:06:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:56.125 16:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:56.125 16:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:56.125 16:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:56.125 16:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:56.125 16:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:56.125 16:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:56.125 16:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:56.125 16:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:56.125 16:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:56.125 16:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:56.125 16:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:56.125 16:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:56.125 16:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:09:56.125 16:06:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.125 16:06:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.125 16:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:56.125 16:06:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.385 16:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:56.385 "name": "Existed_Raid", 00:09:56.385 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.385 "strip_size_kb": 0, 00:09:56.385 "state": "configuring", 00:09:56.385 "raid_level": "raid1", 00:09:56.385 "superblock": false, 00:09:56.385 "num_base_bdevs": 3, 00:09:56.385 "num_base_bdevs_discovered": 2, 00:09:56.385 "num_base_bdevs_operational": 3, 00:09:56.385 "base_bdevs_list": [ 00:09:56.385 { 00:09:56.385 "name": "BaseBdev1", 00:09:56.385 "uuid": "dff9d3f4-a441-416e-9c3b-ee83b674adb6", 00:09:56.385 "is_configured": true, 00:09:56.385 "data_offset": 0, 00:09:56.385 "data_size": 65536 00:09:56.385 }, 00:09:56.385 { 00:09:56.385 "name": "BaseBdev2", 00:09:56.385 "uuid": "221f727c-2be9-420e-a1e3-348c91e4312a", 00:09:56.385 "is_configured": true, 00:09:56.385 "data_offset": 0, 00:09:56.385 "data_size": 65536 00:09:56.385 }, 00:09:56.385 { 00:09:56.385 "name": "BaseBdev3", 00:09:56.385 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.385 "is_configured": false, 00:09:56.385 "data_offset": 0, 00:09:56.385 "data_size": 0 00:09:56.385 } 00:09:56.385 ] 00:09:56.385 }' 00:09:56.385 16:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:56.385 16:06:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.644 16:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev3 00:09:56.644 16:06:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.644 16:06:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.644 [2024-12-12 16:06:22.893954] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:56.644 [2024-12-12 16:06:22.894101] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:56.644 [2024-12-12 16:06:22.894137] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:56.644 [2024-12-12 16:06:22.894476] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:56.644 [2024-12-12 16:06:22.894711] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:56.645 [2024-12-12 16:06:22.894751] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:56.645 [2024-12-12 16:06:22.895116] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:56.645 BaseBdev3 00:09:56.645 16:06:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.645 16:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:56.645 16:06:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:56.645 16:06:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:56.645 16:06:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:56.645 16:06:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:56.645 16:06:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:56.645 16:06:22 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:56.645 16:06:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.645 16:06:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.645 16:06:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.645 16:06:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:56.645 16:06:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.645 16:06:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.645 [ 00:09:56.645 { 00:09:56.645 "name": "BaseBdev3", 00:09:56.645 "aliases": [ 00:09:56.645 "678f9b5b-c98a-44d9-a8a5-55e9d78277c2" 00:09:56.645 ], 00:09:56.645 "product_name": "Malloc disk", 00:09:56.645 "block_size": 512, 00:09:56.645 "num_blocks": 65536, 00:09:56.645 "uuid": "678f9b5b-c98a-44d9-a8a5-55e9d78277c2", 00:09:56.645 "assigned_rate_limits": { 00:09:56.645 "rw_ios_per_sec": 0, 00:09:56.645 "rw_mbytes_per_sec": 0, 00:09:56.645 "r_mbytes_per_sec": 0, 00:09:56.645 "w_mbytes_per_sec": 0 00:09:56.645 }, 00:09:56.645 "claimed": true, 00:09:56.645 "claim_type": "exclusive_write", 00:09:56.645 "zoned": false, 00:09:56.645 "supported_io_types": { 00:09:56.645 "read": true, 00:09:56.645 "write": true, 00:09:56.645 "unmap": true, 00:09:56.645 "flush": true, 00:09:56.645 "reset": true, 00:09:56.645 "nvme_admin": false, 00:09:56.645 "nvme_io": false, 00:09:56.645 "nvme_io_md": false, 00:09:56.645 "write_zeroes": true, 00:09:56.645 "zcopy": true, 00:09:56.645 "get_zone_info": false, 00:09:56.645 "zone_management": false, 00:09:56.645 "zone_append": false, 00:09:56.645 "compare": false, 00:09:56.645 "compare_and_write": false, 00:09:56.645 "abort": true, 00:09:56.645 "seek_hole": false, 00:09:56.645 "seek_data": false, 00:09:56.645 
"copy": true, 00:09:56.645 "nvme_iov_md": false 00:09:56.645 }, 00:09:56.645 "memory_domains": [ 00:09:56.645 { 00:09:56.645 "dma_device_id": "system", 00:09:56.645 "dma_device_type": 1 00:09:56.645 }, 00:09:56.645 { 00:09:56.645 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:56.645 "dma_device_type": 2 00:09:56.645 } 00:09:56.645 ], 00:09:56.645 "driver_specific": {} 00:09:56.645 } 00:09:56.645 ] 00:09:56.645 16:06:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.645 16:06:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:56.645 16:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:56.645 16:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:56.645 16:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:56.645 16:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:56.645 16:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:56.645 16:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:56.645 16:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:56.645 16:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:56.645 16:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:56.645 16:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:56.645 16:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:56.645 16:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:56.645 16:06:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:56.645 16:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.645 16:06:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.645 16:06:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.645 16:06:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.645 16:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:56.645 "name": "Existed_Raid", 00:09:56.645 "uuid": "726760c4-3b5a-43e3-95ce-0921ef1e9cca", 00:09:56.645 "strip_size_kb": 0, 00:09:56.645 "state": "online", 00:09:56.645 "raid_level": "raid1", 00:09:56.645 "superblock": false, 00:09:56.645 "num_base_bdevs": 3, 00:09:56.645 "num_base_bdevs_discovered": 3, 00:09:56.645 "num_base_bdevs_operational": 3, 00:09:56.645 "base_bdevs_list": [ 00:09:56.645 { 00:09:56.645 "name": "BaseBdev1", 00:09:56.645 "uuid": "dff9d3f4-a441-416e-9c3b-ee83b674adb6", 00:09:56.645 "is_configured": true, 00:09:56.645 "data_offset": 0, 00:09:56.645 "data_size": 65536 00:09:56.645 }, 00:09:56.645 { 00:09:56.645 "name": "BaseBdev2", 00:09:56.645 "uuid": "221f727c-2be9-420e-a1e3-348c91e4312a", 00:09:56.645 "is_configured": true, 00:09:56.645 "data_offset": 0, 00:09:56.645 "data_size": 65536 00:09:56.645 }, 00:09:56.645 { 00:09:56.645 "name": "BaseBdev3", 00:09:56.645 "uuid": "678f9b5b-c98a-44d9-a8a5-55e9d78277c2", 00:09:56.645 "is_configured": true, 00:09:56.645 "data_offset": 0, 00:09:56.645 "data_size": 65536 00:09:56.645 } 00:09:56.645 ] 00:09:56.645 }' 00:09:56.645 16:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:56.645 16:06:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.214 16:06:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:57.214 16:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:57.214 16:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:57.214 16:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:57.214 16:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:57.214 16:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:57.214 16:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:57.214 16:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:57.214 16:06:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.214 16:06:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.214 [2024-12-12 16:06:23.353458] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:57.214 16:06:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.214 16:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:57.214 "name": "Existed_Raid", 00:09:57.214 "aliases": [ 00:09:57.214 "726760c4-3b5a-43e3-95ce-0921ef1e9cca" 00:09:57.214 ], 00:09:57.214 "product_name": "Raid Volume", 00:09:57.214 "block_size": 512, 00:09:57.214 "num_blocks": 65536, 00:09:57.214 "uuid": "726760c4-3b5a-43e3-95ce-0921ef1e9cca", 00:09:57.214 "assigned_rate_limits": { 00:09:57.215 "rw_ios_per_sec": 0, 00:09:57.215 "rw_mbytes_per_sec": 0, 00:09:57.215 "r_mbytes_per_sec": 0, 00:09:57.215 "w_mbytes_per_sec": 0 00:09:57.215 }, 00:09:57.215 "claimed": false, 00:09:57.215 "zoned": false, 
00:09:57.215 "supported_io_types": { 00:09:57.215 "read": true, 00:09:57.215 "write": true, 00:09:57.215 "unmap": false, 00:09:57.215 "flush": false, 00:09:57.215 "reset": true, 00:09:57.215 "nvme_admin": false, 00:09:57.215 "nvme_io": false, 00:09:57.215 "nvme_io_md": false, 00:09:57.215 "write_zeroes": true, 00:09:57.215 "zcopy": false, 00:09:57.215 "get_zone_info": false, 00:09:57.215 "zone_management": false, 00:09:57.215 "zone_append": false, 00:09:57.215 "compare": false, 00:09:57.215 "compare_and_write": false, 00:09:57.215 "abort": false, 00:09:57.215 "seek_hole": false, 00:09:57.215 "seek_data": false, 00:09:57.215 "copy": false, 00:09:57.215 "nvme_iov_md": false 00:09:57.215 }, 00:09:57.215 "memory_domains": [ 00:09:57.215 { 00:09:57.215 "dma_device_id": "system", 00:09:57.215 "dma_device_type": 1 00:09:57.215 }, 00:09:57.215 { 00:09:57.215 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:57.215 "dma_device_type": 2 00:09:57.215 }, 00:09:57.215 { 00:09:57.215 "dma_device_id": "system", 00:09:57.215 "dma_device_type": 1 00:09:57.215 }, 00:09:57.215 { 00:09:57.215 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:57.215 "dma_device_type": 2 00:09:57.215 }, 00:09:57.215 { 00:09:57.215 "dma_device_id": "system", 00:09:57.215 "dma_device_type": 1 00:09:57.215 }, 00:09:57.215 { 00:09:57.215 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:57.215 "dma_device_type": 2 00:09:57.215 } 00:09:57.215 ], 00:09:57.215 "driver_specific": { 00:09:57.215 "raid": { 00:09:57.215 "uuid": "726760c4-3b5a-43e3-95ce-0921ef1e9cca", 00:09:57.215 "strip_size_kb": 0, 00:09:57.215 "state": "online", 00:09:57.215 "raid_level": "raid1", 00:09:57.215 "superblock": false, 00:09:57.215 "num_base_bdevs": 3, 00:09:57.215 "num_base_bdevs_discovered": 3, 00:09:57.215 "num_base_bdevs_operational": 3, 00:09:57.215 "base_bdevs_list": [ 00:09:57.215 { 00:09:57.215 "name": "BaseBdev1", 00:09:57.215 "uuid": "dff9d3f4-a441-416e-9c3b-ee83b674adb6", 00:09:57.215 "is_configured": true, 00:09:57.215 
"data_offset": 0, 00:09:57.215 "data_size": 65536 00:09:57.215 }, 00:09:57.215 { 00:09:57.215 "name": "BaseBdev2", 00:09:57.215 "uuid": "221f727c-2be9-420e-a1e3-348c91e4312a", 00:09:57.215 "is_configured": true, 00:09:57.215 "data_offset": 0, 00:09:57.215 "data_size": 65536 00:09:57.215 }, 00:09:57.215 { 00:09:57.215 "name": "BaseBdev3", 00:09:57.215 "uuid": "678f9b5b-c98a-44d9-a8a5-55e9d78277c2", 00:09:57.215 "is_configured": true, 00:09:57.215 "data_offset": 0, 00:09:57.215 "data_size": 65536 00:09:57.215 } 00:09:57.215 ] 00:09:57.215 } 00:09:57.215 } 00:09:57.215 }' 00:09:57.215 16:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:57.215 16:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:57.215 BaseBdev2 00:09:57.215 BaseBdev3' 00:09:57.215 16:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:57.215 16:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:57.215 16:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:57.215 16:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:57.215 16:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:57.215 16:06:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.215 16:06:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.215 16:06:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.215 16:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:09:57.215 16:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:57.215 16:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:57.215 16:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:57.215 16:06:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.215 16:06:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.215 16:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:57.215 16:06:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.215 16:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:57.215 16:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:57.215 16:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:57.475 16:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:57.475 16:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:57.475 16:06:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.475 16:06:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.475 16:06:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.475 16:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:57.475 16:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 
== \5\1\2\ \ \ ]] 00:09:57.475 16:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:57.475 16:06:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.475 16:06:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.475 [2024-12-12 16:06:23.620825] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:57.475 16:06:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.475 16:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:57.475 16:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:09:57.475 16:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:57.475 16:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:57.475 16:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:09:57.475 16:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:09:57.475 16:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:57.475 16:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:57.475 16:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:57.475 16:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:57.475 16:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:57.475 16:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:57.475 16:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:09:57.475 16:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:57.475 16:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:57.475 16:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.475 16:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:57.475 16:06:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.475 16:06:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.475 16:06:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.475 16:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:57.475 "name": "Existed_Raid", 00:09:57.475 "uuid": "726760c4-3b5a-43e3-95ce-0921ef1e9cca", 00:09:57.475 "strip_size_kb": 0, 00:09:57.475 "state": "online", 00:09:57.475 "raid_level": "raid1", 00:09:57.475 "superblock": false, 00:09:57.475 "num_base_bdevs": 3, 00:09:57.475 "num_base_bdevs_discovered": 2, 00:09:57.475 "num_base_bdevs_operational": 2, 00:09:57.475 "base_bdevs_list": [ 00:09:57.475 { 00:09:57.475 "name": null, 00:09:57.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.475 "is_configured": false, 00:09:57.475 "data_offset": 0, 00:09:57.475 "data_size": 65536 00:09:57.475 }, 00:09:57.475 { 00:09:57.475 "name": "BaseBdev2", 00:09:57.475 "uuid": "221f727c-2be9-420e-a1e3-348c91e4312a", 00:09:57.475 "is_configured": true, 00:09:57.475 "data_offset": 0, 00:09:57.475 "data_size": 65536 00:09:57.475 }, 00:09:57.475 { 00:09:57.475 "name": "BaseBdev3", 00:09:57.475 "uuid": "678f9b5b-c98a-44d9-a8a5-55e9d78277c2", 00:09:57.475 "is_configured": true, 00:09:57.475 "data_offset": 0, 00:09:57.475 "data_size": 65536 00:09:57.475 } 00:09:57.475 ] 
00:09:57.475 }' 00:09:57.475 16:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:57.475 16:06:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.044 16:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:58.044 16:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:58.044 16:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:58.044 16:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.044 16:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.044 16:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.044 16:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.044 16:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:58.044 16:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:58.044 16:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:58.044 16:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.044 16:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.044 [2024-12-12 16:06:24.182069] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:58.044 16:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.044 16:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:58.044 16:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:58.044 16:06:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.044 16:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.044 16:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.044 16:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:58.044 16:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.044 16:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:58.044 16:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:58.044 16:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:58.044 16:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.045 16:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.045 [2024-12-12 16:06:24.348031] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:58.045 [2024-12-12 16:06:24.348156] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:58.305 [2024-12-12 16:06:24.454601] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:58.305 [2024-12-12 16:06:24.454767] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:58.305 [2024-12-12 16:06:24.454787] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:58.305 16:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.305 16:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:58.305 16:06:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:58.305 16:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.305 16:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:58.305 16:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.305 16:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.305 16:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.305 16:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:58.305 16:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:58.305 16:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:58.305 16:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:58.305 16:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:58.305 16:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:58.305 16:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.305 16:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.305 BaseBdev2 00:09:58.305 16:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.305 16:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:58.305 16:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:58.305 16:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:58.305 
16:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:58.305 16:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:58.305 16:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:58.305 16:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:58.305 16:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.305 16:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.305 16:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.305 16:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:58.305 16:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.305 16:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.305 [ 00:09:58.305 { 00:09:58.305 "name": "BaseBdev2", 00:09:58.305 "aliases": [ 00:09:58.305 "2af2dcc1-42ec-451d-9d0d-57649a10c967" 00:09:58.305 ], 00:09:58.305 "product_name": "Malloc disk", 00:09:58.305 "block_size": 512, 00:09:58.305 "num_blocks": 65536, 00:09:58.305 "uuid": "2af2dcc1-42ec-451d-9d0d-57649a10c967", 00:09:58.305 "assigned_rate_limits": { 00:09:58.305 "rw_ios_per_sec": 0, 00:09:58.305 "rw_mbytes_per_sec": 0, 00:09:58.305 "r_mbytes_per_sec": 0, 00:09:58.305 "w_mbytes_per_sec": 0 00:09:58.305 }, 00:09:58.305 "claimed": false, 00:09:58.305 "zoned": false, 00:09:58.305 "supported_io_types": { 00:09:58.305 "read": true, 00:09:58.305 "write": true, 00:09:58.305 "unmap": true, 00:09:58.305 "flush": true, 00:09:58.305 "reset": true, 00:09:58.305 "nvme_admin": false, 00:09:58.305 "nvme_io": false, 00:09:58.305 "nvme_io_md": false, 00:09:58.305 "write_zeroes": true, 
00:09:58.305 "zcopy": true, 00:09:58.305 "get_zone_info": false, 00:09:58.305 "zone_management": false, 00:09:58.305 "zone_append": false, 00:09:58.305 "compare": false, 00:09:58.305 "compare_and_write": false, 00:09:58.305 "abort": true, 00:09:58.305 "seek_hole": false, 00:09:58.305 "seek_data": false, 00:09:58.305 "copy": true, 00:09:58.305 "nvme_iov_md": false 00:09:58.305 }, 00:09:58.305 "memory_domains": [ 00:09:58.305 { 00:09:58.305 "dma_device_id": "system", 00:09:58.305 "dma_device_type": 1 00:09:58.305 }, 00:09:58.305 { 00:09:58.305 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:58.305 "dma_device_type": 2 00:09:58.305 } 00:09:58.305 ], 00:09:58.305 "driver_specific": {} 00:09:58.305 } 00:09:58.305 ] 00:09:58.305 16:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.305 16:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:58.305 16:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:58.305 16:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:58.305 16:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:58.305 16:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.305 16:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.305 BaseBdev3 00:09:58.305 16:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.305 16:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:58.305 16:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:58.305 16:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:58.305 16:06:24 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:58.305 16:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:58.305 16:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:58.305 16:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:58.305 16:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.305 16:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.305 16:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.305 16:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:58.305 16:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.305 16:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.565 [ 00:09:58.565 { 00:09:58.565 "name": "BaseBdev3", 00:09:58.565 "aliases": [ 00:09:58.565 "c3eeda0c-af04-4c4f-bd05-3ee02f54ed6f" 00:09:58.565 ], 00:09:58.565 "product_name": "Malloc disk", 00:09:58.565 "block_size": 512, 00:09:58.565 "num_blocks": 65536, 00:09:58.565 "uuid": "c3eeda0c-af04-4c4f-bd05-3ee02f54ed6f", 00:09:58.565 "assigned_rate_limits": { 00:09:58.565 "rw_ios_per_sec": 0, 00:09:58.565 "rw_mbytes_per_sec": 0, 00:09:58.565 "r_mbytes_per_sec": 0, 00:09:58.565 "w_mbytes_per_sec": 0 00:09:58.565 }, 00:09:58.565 "claimed": false, 00:09:58.565 "zoned": false, 00:09:58.565 "supported_io_types": { 00:09:58.565 "read": true, 00:09:58.565 "write": true, 00:09:58.565 "unmap": true, 00:09:58.565 "flush": true, 00:09:58.565 "reset": true, 00:09:58.565 "nvme_admin": false, 00:09:58.565 "nvme_io": false, 00:09:58.565 "nvme_io_md": false, 00:09:58.565 "write_zeroes": true, 
00:09:58.565 "zcopy": true, 00:09:58.565 "get_zone_info": false, 00:09:58.565 "zone_management": false, 00:09:58.565 "zone_append": false, 00:09:58.565 "compare": false, 00:09:58.565 "compare_and_write": false, 00:09:58.565 "abort": true, 00:09:58.565 "seek_hole": false, 00:09:58.565 "seek_data": false, 00:09:58.565 "copy": true, 00:09:58.565 "nvme_iov_md": false 00:09:58.565 }, 00:09:58.565 "memory_domains": [ 00:09:58.565 { 00:09:58.565 "dma_device_id": "system", 00:09:58.565 "dma_device_type": 1 00:09:58.565 }, 00:09:58.565 { 00:09:58.565 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:58.565 "dma_device_type": 2 00:09:58.565 } 00:09:58.565 ], 00:09:58.565 "driver_specific": {} 00:09:58.565 } 00:09:58.565 ] 00:09:58.565 16:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.565 16:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:58.566 16:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:58.566 16:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:58.566 16:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:58.566 16:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.566 16:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.566 [2024-12-12 16:06:24.680726] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:58.566 [2024-12-12 16:06:24.680858] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:58.566 [2024-12-12 16:06:24.680921] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:58.566 [2024-12-12 16:06:24.683044] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:58.566 16:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.566 16:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:58.566 16:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:58.566 16:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:58.566 16:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:58.566 16:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:58.566 16:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:58.566 16:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:58.566 16:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:58.566 16:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:58.566 16:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:58.566 16:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.566 16:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:58.566 16:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.566 16:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.566 16:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.566 16:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:09:58.566 "name": "Existed_Raid", 00:09:58.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.566 "strip_size_kb": 0, 00:09:58.566 "state": "configuring", 00:09:58.566 "raid_level": "raid1", 00:09:58.566 "superblock": false, 00:09:58.566 "num_base_bdevs": 3, 00:09:58.566 "num_base_bdevs_discovered": 2, 00:09:58.566 "num_base_bdevs_operational": 3, 00:09:58.566 "base_bdevs_list": [ 00:09:58.566 { 00:09:58.566 "name": "BaseBdev1", 00:09:58.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.566 "is_configured": false, 00:09:58.566 "data_offset": 0, 00:09:58.566 "data_size": 0 00:09:58.566 }, 00:09:58.566 { 00:09:58.566 "name": "BaseBdev2", 00:09:58.566 "uuid": "2af2dcc1-42ec-451d-9d0d-57649a10c967", 00:09:58.566 "is_configured": true, 00:09:58.566 "data_offset": 0, 00:09:58.566 "data_size": 65536 00:09:58.566 }, 00:09:58.566 { 00:09:58.566 "name": "BaseBdev3", 00:09:58.566 "uuid": "c3eeda0c-af04-4c4f-bd05-3ee02f54ed6f", 00:09:58.566 "is_configured": true, 00:09:58.566 "data_offset": 0, 00:09:58.566 "data_size": 65536 00:09:58.566 } 00:09:58.566 ] 00:09:58.566 }' 00:09:58.566 16:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:58.566 16:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.826 16:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:58.826 16:06:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.826 16:06:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.826 [2024-12-12 16:06:25.112097] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:58.826 16:06:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.826 16:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 3 00:09:58.826 16:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:58.826 16:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:58.826 16:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:58.826 16:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:58.826 16:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:58.826 16:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:58.826 16:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:58.826 16:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:58.826 16:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:58.826 16:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.826 16:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:58.826 16:06:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.826 16:06:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.826 16:06:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.826 16:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:58.826 "name": "Existed_Raid", 00:09:58.826 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.826 "strip_size_kb": 0, 00:09:58.826 "state": "configuring", 00:09:58.826 "raid_level": "raid1", 00:09:58.826 "superblock": false, 00:09:58.826 "num_base_bdevs": 3, 
00:09:58.826 "num_base_bdevs_discovered": 1, 00:09:58.826 "num_base_bdevs_operational": 3, 00:09:58.826 "base_bdevs_list": [ 00:09:58.826 { 00:09:58.826 "name": "BaseBdev1", 00:09:58.826 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.826 "is_configured": false, 00:09:58.826 "data_offset": 0, 00:09:58.826 "data_size": 0 00:09:58.826 }, 00:09:58.826 { 00:09:58.826 "name": null, 00:09:58.826 "uuid": "2af2dcc1-42ec-451d-9d0d-57649a10c967", 00:09:58.826 "is_configured": false, 00:09:58.826 "data_offset": 0, 00:09:58.826 "data_size": 65536 00:09:58.826 }, 00:09:58.826 { 00:09:58.826 "name": "BaseBdev3", 00:09:58.826 "uuid": "c3eeda0c-af04-4c4f-bd05-3ee02f54ed6f", 00:09:58.826 "is_configured": true, 00:09:58.826 "data_offset": 0, 00:09:58.826 "data_size": 65536 00:09:58.826 } 00:09:58.826 ] 00:09:58.826 }' 00:09:58.826 16:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:58.826 16:06:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.399 16:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.399 16:06:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.399 16:06:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.399 16:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:59.399 16:06:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.399 16:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:59.399 16:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:59.399 16:06:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.399 16:06:25 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.399 [2024-12-12 16:06:25.680375] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:59.399 BaseBdev1 00:09:59.399 16:06:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.399 16:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:59.399 16:06:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:59.399 16:06:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:59.399 16:06:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:59.399 16:06:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:59.399 16:06:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:59.399 16:06:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:59.399 16:06:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.399 16:06:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.399 16:06:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.399 16:06:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:59.399 16:06:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.399 16:06:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.399 [ 00:09:59.399 { 00:09:59.399 "name": "BaseBdev1", 00:09:59.399 "aliases": [ 00:09:59.399 "464423fe-a24c-4510-b029-05e9396c4bf6" 00:09:59.399 ], 00:09:59.399 "product_name": "Malloc disk", 
00:09:59.399 "block_size": 512, 00:09:59.399 "num_blocks": 65536, 00:09:59.399 "uuid": "464423fe-a24c-4510-b029-05e9396c4bf6", 00:09:59.399 "assigned_rate_limits": { 00:09:59.399 "rw_ios_per_sec": 0, 00:09:59.399 "rw_mbytes_per_sec": 0, 00:09:59.399 "r_mbytes_per_sec": 0, 00:09:59.399 "w_mbytes_per_sec": 0 00:09:59.399 }, 00:09:59.399 "claimed": true, 00:09:59.399 "claim_type": "exclusive_write", 00:09:59.399 "zoned": false, 00:09:59.399 "supported_io_types": { 00:09:59.399 "read": true, 00:09:59.399 "write": true, 00:09:59.399 "unmap": true, 00:09:59.399 "flush": true, 00:09:59.399 "reset": true, 00:09:59.399 "nvme_admin": false, 00:09:59.399 "nvme_io": false, 00:09:59.399 "nvme_io_md": false, 00:09:59.399 "write_zeroes": true, 00:09:59.399 "zcopy": true, 00:09:59.399 "get_zone_info": false, 00:09:59.399 "zone_management": false, 00:09:59.399 "zone_append": false, 00:09:59.399 "compare": false, 00:09:59.399 "compare_and_write": false, 00:09:59.399 "abort": true, 00:09:59.399 "seek_hole": false, 00:09:59.399 "seek_data": false, 00:09:59.399 "copy": true, 00:09:59.399 "nvme_iov_md": false 00:09:59.399 }, 00:09:59.399 "memory_domains": [ 00:09:59.399 { 00:09:59.399 "dma_device_id": "system", 00:09:59.399 "dma_device_type": 1 00:09:59.399 }, 00:09:59.399 { 00:09:59.399 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.399 "dma_device_type": 2 00:09:59.399 } 00:09:59.399 ], 00:09:59.399 "driver_specific": {} 00:09:59.399 } 00:09:59.399 ] 00:09:59.399 16:06:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.399 16:06:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:59.399 16:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:59.399 16:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:59.399 16:06:25 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:59.399 16:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:59.400 16:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:59.400 16:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:59.400 16:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:59.400 16:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:59.400 16:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:59.400 16:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:59.400 16:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.400 16:06:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.400 16:06:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.400 16:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:59.400 16:06:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.673 16:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:59.673 "name": "Existed_Raid", 00:09:59.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.673 "strip_size_kb": 0, 00:09:59.673 "state": "configuring", 00:09:59.673 "raid_level": "raid1", 00:09:59.673 "superblock": false, 00:09:59.673 "num_base_bdevs": 3, 00:09:59.673 "num_base_bdevs_discovered": 2, 00:09:59.673 "num_base_bdevs_operational": 3, 00:09:59.673 "base_bdevs_list": [ 00:09:59.673 { 00:09:59.673 "name": "BaseBdev1", 00:09:59.673 "uuid": 
"464423fe-a24c-4510-b029-05e9396c4bf6", 00:09:59.673 "is_configured": true, 00:09:59.673 "data_offset": 0, 00:09:59.673 "data_size": 65536 00:09:59.673 }, 00:09:59.673 { 00:09:59.673 "name": null, 00:09:59.673 "uuid": "2af2dcc1-42ec-451d-9d0d-57649a10c967", 00:09:59.673 "is_configured": false, 00:09:59.673 "data_offset": 0, 00:09:59.673 "data_size": 65536 00:09:59.673 }, 00:09:59.673 { 00:09:59.673 "name": "BaseBdev3", 00:09:59.673 "uuid": "c3eeda0c-af04-4c4f-bd05-3ee02f54ed6f", 00:09:59.673 "is_configured": true, 00:09:59.673 "data_offset": 0, 00:09:59.673 "data_size": 65536 00:09:59.673 } 00:09:59.673 ] 00:09:59.673 }' 00:09:59.673 16:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:59.673 16:06:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.946 16:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.946 16:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.946 16:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.946 16:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:59.946 16:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.946 16:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:59.946 16:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:59.946 16:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.946 16:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.946 [2024-12-12 16:06:26.195592] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:59.946 16:06:26 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.946 16:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:59.946 16:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:59.946 16:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:59.946 16:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:59.946 16:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:59.946 16:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:59.946 16:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:59.946 16:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:59.946 16:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:59.946 16:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:59.946 16:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:59.946 16:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.946 16:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.946 16:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.946 16:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.946 16:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:59.946 "name": "Existed_Raid", 00:09:59.946 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:59.946 "strip_size_kb": 0, 00:09:59.946 "state": "configuring", 00:09:59.946 "raid_level": "raid1", 00:09:59.946 "superblock": false, 00:09:59.946 "num_base_bdevs": 3, 00:09:59.946 "num_base_bdevs_discovered": 1, 00:09:59.946 "num_base_bdevs_operational": 3, 00:09:59.946 "base_bdevs_list": [ 00:09:59.946 { 00:09:59.946 "name": "BaseBdev1", 00:09:59.946 "uuid": "464423fe-a24c-4510-b029-05e9396c4bf6", 00:09:59.946 "is_configured": true, 00:09:59.946 "data_offset": 0, 00:09:59.946 "data_size": 65536 00:09:59.946 }, 00:09:59.946 { 00:09:59.946 "name": null, 00:09:59.946 "uuid": "2af2dcc1-42ec-451d-9d0d-57649a10c967", 00:09:59.946 "is_configured": false, 00:09:59.946 "data_offset": 0, 00:09:59.946 "data_size": 65536 00:09:59.946 }, 00:09:59.946 { 00:09:59.946 "name": null, 00:09:59.946 "uuid": "c3eeda0c-af04-4c4f-bd05-3ee02f54ed6f", 00:09:59.946 "is_configured": false, 00:09:59.946 "data_offset": 0, 00:09:59.946 "data_size": 65536 00:09:59.946 } 00:09:59.946 ] 00:09:59.946 }' 00:09:59.946 16:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:59.946 16:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.516 16:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.516 16:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:00.516 16:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.516 16:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.516 16:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.516 16:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:00.516 16:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:00.516 16:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.516 16:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.516 [2024-12-12 16:06:26.702775] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:00.516 16:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.516 16:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:00.516 16:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:00.516 16:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:00.516 16:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:00.516 16:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:00.516 16:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:00.516 16:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:00.516 16:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:00.516 16:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:00.516 16:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:00.516 16:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:00.516 16:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.516 16:06:26 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.516 16:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.516 16:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.516 16:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:00.516 "name": "Existed_Raid", 00:10:00.516 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.516 "strip_size_kb": 0, 00:10:00.516 "state": "configuring", 00:10:00.516 "raid_level": "raid1", 00:10:00.516 "superblock": false, 00:10:00.516 "num_base_bdevs": 3, 00:10:00.516 "num_base_bdevs_discovered": 2, 00:10:00.516 "num_base_bdevs_operational": 3, 00:10:00.516 "base_bdevs_list": [ 00:10:00.516 { 00:10:00.516 "name": "BaseBdev1", 00:10:00.516 "uuid": "464423fe-a24c-4510-b029-05e9396c4bf6", 00:10:00.516 "is_configured": true, 00:10:00.516 "data_offset": 0, 00:10:00.516 "data_size": 65536 00:10:00.516 }, 00:10:00.516 { 00:10:00.516 "name": null, 00:10:00.516 "uuid": "2af2dcc1-42ec-451d-9d0d-57649a10c967", 00:10:00.516 "is_configured": false, 00:10:00.516 "data_offset": 0, 00:10:00.516 "data_size": 65536 00:10:00.516 }, 00:10:00.516 { 00:10:00.516 "name": "BaseBdev3", 00:10:00.516 "uuid": "c3eeda0c-af04-4c4f-bd05-3ee02f54ed6f", 00:10:00.516 "is_configured": true, 00:10:00.516 "data_offset": 0, 00:10:00.516 "data_size": 65536 00:10:00.516 } 00:10:00.516 ] 00:10:00.516 }' 00:10:00.516 16:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:00.516 16:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.086 16:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:01.086 16:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.086 16:06:27 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.086 16:06:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.086 16:06:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.086 16:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:01.086 16:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:01.086 16:06:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.086 16:06:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.086 [2024-12-12 16:06:27.174005] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:01.086 16:06:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.086 16:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:01.086 16:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:01.086 16:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:01.086 16:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:01.086 16:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:01.086 16:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:01.086 16:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:01.086 16:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:01.086 16:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:01.086 16:06:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:01.086 16:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.086 16:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:01.086 16:06:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.086 16:06:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.086 16:06:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.086 16:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:01.086 "name": "Existed_Raid", 00:10:01.086 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:01.086 "strip_size_kb": 0, 00:10:01.086 "state": "configuring", 00:10:01.086 "raid_level": "raid1", 00:10:01.086 "superblock": false, 00:10:01.086 "num_base_bdevs": 3, 00:10:01.086 "num_base_bdevs_discovered": 1, 00:10:01.086 "num_base_bdevs_operational": 3, 00:10:01.086 "base_bdevs_list": [ 00:10:01.086 { 00:10:01.086 "name": null, 00:10:01.086 "uuid": "464423fe-a24c-4510-b029-05e9396c4bf6", 00:10:01.086 "is_configured": false, 00:10:01.086 "data_offset": 0, 00:10:01.086 "data_size": 65536 00:10:01.087 }, 00:10:01.087 { 00:10:01.087 "name": null, 00:10:01.087 "uuid": "2af2dcc1-42ec-451d-9d0d-57649a10c967", 00:10:01.087 "is_configured": false, 00:10:01.087 "data_offset": 0, 00:10:01.087 "data_size": 65536 00:10:01.087 }, 00:10:01.087 { 00:10:01.087 "name": "BaseBdev3", 00:10:01.087 "uuid": "c3eeda0c-af04-4c4f-bd05-3ee02f54ed6f", 00:10:01.087 "is_configured": true, 00:10:01.087 "data_offset": 0, 00:10:01.087 "data_size": 65536 00:10:01.087 } 00:10:01.087 ] 00:10:01.087 }' 00:10:01.087 16:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:01.087 16:06:27 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:10:01.346 16:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.346 16:06:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.346 16:06:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.346 16:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:01.606 16:06:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.606 16:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:01.606 16:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:01.606 16:06:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.606 16:06:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.606 [2024-12-12 16:06:27.746656] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:01.606 16:06:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.606 16:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:01.606 16:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:01.606 16:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:01.606 16:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:01.606 16:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:01.606 16:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=3 00:10:01.606 16:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:01.606 16:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:01.606 16:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:01.606 16:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:01.606 16:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.606 16:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:01.606 16:06:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.606 16:06:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.606 16:06:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.606 16:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:01.606 "name": "Existed_Raid", 00:10:01.606 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:01.606 "strip_size_kb": 0, 00:10:01.606 "state": "configuring", 00:10:01.606 "raid_level": "raid1", 00:10:01.606 "superblock": false, 00:10:01.606 "num_base_bdevs": 3, 00:10:01.606 "num_base_bdevs_discovered": 2, 00:10:01.606 "num_base_bdevs_operational": 3, 00:10:01.606 "base_bdevs_list": [ 00:10:01.606 { 00:10:01.606 "name": null, 00:10:01.606 "uuid": "464423fe-a24c-4510-b029-05e9396c4bf6", 00:10:01.606 "is_configured": false, 00:10:01.606 "data_offset": 0, 00:10:01.606 "data_size": 65536 00:10:01.606 }, 00:10:01.606 { 00:10:01.606 "name": "BaseBdev2", 00:10:01.606 "uuid": "2af2dcc1-42ec-451d-9d0d-57649a10c967", 00:10:01.606 "is_configured": true, 00:10:01.606 "data_offset": 0, 00:10:01.606 "data_size": 65536 00:10:01.606 }, 00:10:01.606 { 
00:10:01.606 "name": "BaseBdev3", 00:10:01.606 "uuid": "c3eeda0c-af04-4c4f-bd05-3ee02f54ed6f", 00:10:01.606 "is_configured": true, 00:10:01.606 "data_offset": 0, 00:10:01.606 "data_size": 65536 00:10:01.606 } 00:10:01.606 ] 00:10:01.606 }' 00:10:01.606 16:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:01.606 16:06:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.866 16:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.866 16:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:01.866 16:06:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.127 16:06:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.127 16:06:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.127 16:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:02.127 16:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:02.127 16:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.127 16:06:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.127 16:06:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.127 16:06:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.127 16:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 464423fe-a24c-4510-b029-05e9396c4bf6 00:10:02.127 16:06:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.127 16:06:28 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.127 [2024-12-12 16:06:28.336682] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:02.127 [2024-12-12 16:06:28.336741] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:02.127 [2024-12-12 16:06:28.336749] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:02.127 [2024-12-12 16:06:28.337066] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:02.127 [2024-12-12 16:06:28.337226] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:02.127 [2024-12-12 16:06:28.337239] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:02.127 [2024-12-12 16:06:28.337510] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:02.127 NewBaseBdev 00:10:02.127 16:06:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.127 16:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:02.127 16:06:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:02.127 16:06:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:02.127 16:06:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:02.127 16:06:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:02.127 16:06:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:02.127 16:06:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:02.127 16:06:28 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.127 16:06:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.127 16:06:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.127 16:06:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:02.127 16:06:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.127 16:06:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.127 [ 00:10:02.127 { 00:10:02.127 "name": "NewBaseBdev", 00:10:02.127 "aliases": [ 00:10:02.127 "464423fe-a24c-4510-b029-05e9396c4bf6" 00:10:02.127 ], 00:10:02.127 "product_name": "Malloc disk", 00:10:02.127 "block_size": 512, 00:10:02.127 "num_blocks": 65536, 00:10:02.127 "uuid": "464423fe-a24c-4510-b029-05e9396c4bf6", 00:10:02.127 "assigned_rate_limits": { 00:10:02.127 "rw_ios_per_sec": 0, 00:10:02.127 "rw_mbytes_per_sec": 0, 00:10:02.127 "r_mbytes_per_sec": 0, 00:10:02.127 "w_mbytes_per_sec": 0 00:10:02.127 }, 00:10:02.127 "claimed": true, 00:10:02.127 "claim_type": "exclusive_write", 00:10:02.127 "zoned": false, 00:10:02.127 "supported_io_types": { 00:10:02.127 "read": true, 00:10:02.127 "write": true, 00:10:02.127 "unmap": true, 00:10:02.127 "flush": true, 00:10:02.127 "reset": true, 00:10:02.127 "nvme_admin": false, 00:10:02.127 "nvme_io": false, 00:10:02.127 "nvme_io_md": false, 00:10:02.127 "write_zeroes": true, 00:10:02.127 "zcopy": true, 00:10:02.127 "get_zone_info": false, 00:10:02.127 "zone_management": false, 00:10:02.127 "zone_append": false, 00:10:02.127 "compare": false, 00:10:02.127 "compare_and_write": false, 00:10:02.127 "abort": true, 00:10:02.127 "seek_hole": false, 00:10:02.127 "seek_data": false, 00:10:02.127 "copy": true, 00:10:02.127 "nvme_iov_md": false 00:10:02.127 }, 00:10:02.127 "memory_domains": [ 00:10:02.127 { 00:10:02.127 
"dma_device_id": "system", 00:10:02.127 "dma_device_type": 1 00:10:02.127 }, 00:10:02.127 { 00:10:02.127 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:02.127 "dma_device_type": 2 00:10:02.127 } 00:10:02.127 ], 00:10:02.127 "driver_specific": {} 00:10:02.127 } 00:10:02.127 ] 00:10:02.127 16:06:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.127 16:06:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:02.127 16:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:02.127 16:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:02.127 16:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:02.127 16:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:02.127 16:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:02.127 16:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:02.127 16:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.127 16:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.127 16:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.127 16:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.127 16:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:02.127 16:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.127 16:06:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:02.127 16:06:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.127 16:06:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.127 16:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.127 "name": "Existed_Raid", 00:10:02.127 "uuid": "e047e357-e1ce-41d3-a60c-da7c815072c6", 00:10:02.127 "strip_size_kb": 0, 00:10:02.127 "state": "online", 00:10:02.127 "raid_level": "raid1", 00:10:02.127 "superblock": false, 00:10:02.127 "num_base_bdevs": 3, 00:10:02.127 "num_base_bdevs_discovered": 3, 00:10:02.127 "num_base_bdevs_operational": 3, 00:10:02.127 "base_bdevs_list": [ 00:10:02.127 { 00:10:02.127 "name": "NewBaseBdev", 00:10:02.127 "uuid": "464423fe-a24c-4510-b029-05e9396c4bf6", 00:10:02.127 "is_configured": true, 00:10:02.127 "data_offset": 0, 00:10:02.127 "data_size": 65536 00:10:02.127 }, 00:10:02.127 { 00:10:02.127 "name": "BaseBdev2", 00:10:02.127 "uuid": "2af2dcc1-42ec-451d-9d0d-57649a10c967", 00:10:02.127 "is_configured": true, 00:10:02.127 "data_offset": 0, 00:10:02.127 "data_size": 65536 00:10:02.127 }, 00:10:02.127 { 00:10:02.127 "name": "BaseBdev3", 00:10:02.127 "uuid": "c3eeda0c-af04-4c4f-bd05-3ee02f54ed6f", 00:10:02.127 "is_configured": true, 00:10:02.127 "data_offset": 0, 00:10:02.127 "data_size": 65536 00:10:02.127 } 00:10:02.127 ] 00:10:02.127 }' 00:10:02.127 16:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.127 16:06:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.697 16:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:02.697 16:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:02.697 16:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:02.697 16:06:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:02.697 16:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:02.697 16:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:02.697 16:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:02.697 16:06:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.697 16:06:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.697 16:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:02.697 [2024-12-12 16:06:28.868231] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:02.697 16:06:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.698 16:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:02.698 "name": "Existed_Raid", 00:10:02.698 "aliases": [ 00:10:02.698 "e047e357-e1ce-41d3-a60c-da7c815072c6" 00:10:02.698 ], 00:10:02.698 "product_name": "Raid Volume", 00:10:02.698 "block_size": 512, 00:10:02.698 "num_blocks": 65536, 00:10:02.698 "uuid": "e047e357-e1ce-41d3-a60c-da7c815072c6", 00:10:02.698 "assigned_rate_limits": { 00:10:02.698 "rw_ios_per_sec": 0, 00:10:02.698 "rw_mbytes_per_sec": 0, 00:10:02.698 "r_mbytes_per_sec": 0, 00:10:02.698 "w_mbytes_per_sec": 0 00:10:02.698 }, 00:10:02.698 "claimed": false, 00:10:02.698 "zoned": false, 00:10:02.698 "supported_io_types": { 00:10:02.698 "read": true, 00:10:02.698 "write": true, 00:10:02.698 "unmap": false, 00:10:02.698 "flush": false, 00:10:02.698 "reset": true, 00:10:02.698 "nvme_admin": false, 00:10:02.698 "nvme_io": false, 00:10:02.698 "nvme_io_md": false, 00:10:02.698 "write_zeroes": true, 00:10:02.698 "zcopy": false, 00:10:02.698 
"get_zone_info": false, 00:10:02.698 "zone_management": false, 00:10:02.698 "zone_append": false, 00:10:02.698 "compare": false, 00:10:02.698 "compare_and_write": false, 00:10:02.698 "abort": false, 00:10:02.698 "seek_hole": false, 00:10:02.698 "seek_data": false, 00:10:02.698 "copy": false, 00:10:02.698 "nvme_iov_md": false 00:10:02.698 }, 00:10:02.698 "memory_domains": [ 00:10:02.698 { 00:10:02.698 "dma_device_id": "system", 00:10:02.698 "dma_device_type": 1 00:10:02.698 }, 00:10:02.698 { 00:10:02.698 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:02.698 "dma_device_type": 2 00:10:02.698 }, 00:10:02.698 { 00:10:02.698 "dma_device_id": "system", 00:10:02.698 "dma_device_type": 1 00:10:02.698 }, 00:10:02.698 { 00:10:02.698 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:02.698 "dma_device_type": 2 00:10:02.698 }, 00:10:02.698 { 00:10:02.698 "dma_device_id": "system", 00:10:02.698 "dma_device_type": 1 00:10:02.698 }, 00:10:02.698 { 00:10:02.698 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:02.698 "dma_device_type": 2 00:10:02.698 } 00:10:02.698 ], 00:10:02.698 "driver_specific": { 00:10:02.698 "raid": { 00:10:02.698 "uuid": "e047e357-e1ce-41d3-a60c-da7c815072c6", 00:10:02.698 "strip_size_kb": 0, 00:10:02.698 "state": "online", 00:10:02.698 "raid_level": "raid1", 00:10:02.698 "superblock": false, 00:10:02.698 "num_base_bdevs": 3, 00:10:02.698 "num_base_bdevs_discovered": 3, 00:10:02.698 "num_base_bdevs_operational": 3, 00:10:02.698 "base_bdevs_list": [ 00:10:02.698 { 00:10:02.698 "name": "NewBaseBdev", 00:10:02.698 "uuid": "464423fe-a24c-4510-b029-05e9396c4bf6", 00:10:02.698 "is_configured": true, 00:10:02.698 "data_offset": 0, 00:10:02.698 "data_size": 65536 00:10:02.698 }, 00:10:02.698 { 00:10:02.698 "name": "BaseBdev2", 00:10:02.698 "uuid": "2af2dcc1-42ec-451d-9d0d-57649a10c967", 00:10:02.698 "is_configured": true, 00:10:02.698 "data_offset": 0, 00:10:02.698 "data_size": 65536 00:10:02.698 }, 00:10:02.698 { 00:10:02.698 "name": "BaseBdev3", 00:10:02.698 "uuid": 
"c3eeda0c-af04-4c4f-bd05-3ee02f54ed6f", 00:10:02.698 "is_configured": true, 00:10:02.698 "data_offset": 0, 00:10:02.698 "data_size": 65536 00:10:02.698 } 00:10:02.698 ] 00:10:02.698 } 00:10:02.698 } 00:10:02.698 }' 00:10:02.698 16:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:02.698 16:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:02.698 BaseBdev2 00:10:02.698 BaseBdev3' 00:10:02.698 16:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:02.698 16:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:02.698 16:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:02.698 16:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:02.698 16:06:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.698 16:06:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.698 16:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:02.698 16:06:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.958 16:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:02.958 16:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:02.958 16:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:02.958 16:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:10:02.958 16:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:02.958 16:06:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.958 16:06:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.958 16:06:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.958 16:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:02.958 16:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:02.958 16:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:02.958 16:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:02.958 16:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:02.958 16:06:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.958 16:06:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.958 16:06:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.958 16:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:02.959 16:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:02.959 16:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:02.959 16:06:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.959 16:06:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.959 
[2024-12-12 16:06:29.175321] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:02.959 [2024-12-12 16:06:29.175446] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:02.959 [2024-12-12 16:06:29.175554] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:02.959 [2024-12-12 16:06:29.175903] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:02.959 [2024-12-12 16:06:29.175963] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:02.959 16:06:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.959 16:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 69418 00:10:02.959 16:06:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 69418 ']' 00:10:02.959 16:06:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 69418 00:10:02.959 16:06:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:10:02.959 16:06:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:02.959 16:06:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69418 00:10:02.959 killing process with pid 69418 00:10:02.959 16:06:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:02.959 16:06:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:02.959 16:06:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69418' 00:10:02.959 16:06:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 69418 00:10:02.959 [2024-12-12 
16:06:29.222786] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:02.959 16:06:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 69418 00:10:03.218 [2024-12-12 16:06:29.560303] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:04.598 ************************************ 00:10:04.598 END TEST raid_state_function_test 00:10:04.598 ************************************ 00:10:04.598 16:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:04.598 00:10:04.598 real 0m10.774s 00:10:04.598 user 0m16.853s 00:10:04.598 sys 0m1.900s 00:10:04.598 16:06:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:04.598 16:06:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.598 16:06:30 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:10:04.598 16:06:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:04.599 16:06:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:04.599 16:06:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:04.599 ************************************ 00:10:04.599 START TEST raid_state_function_test_sb 00:10:04.599 ************************************ 00:10:04.599 16:06:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 true 00:10:04.599 16:06:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:10:04.599 16:06:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:10:04.599 16:06:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:04.599 16:06:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:04.599 16:06:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:04.599 16:06:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:04.599 16:06:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:04.599 16:06:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:04.599 16:06:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:04.599 16:06:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:04.599 16:06:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:04.599 16:06:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:04.599 16:06:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:04.599 16:06:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:04.599 16:06:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:04.599 16:06:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:04.599 16:06:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:04.599 16:06:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:04.599 16:06:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:04.599 16:06:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:04.599 16:06:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:04.599 16:06:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:10:04.599 
16:06:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:10:04.599 16:06:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:04.599 16:06:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:04.599 16:06:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=70045 00:10:04.599 Process raid pid: 70045 00:10:04.599 16:06:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:04.599 16:06:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 70045' 00:10:04.599 16:06:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 70045 00:10:04.599 16:06:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 70045 ']' 00:10:04.599 16:06:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:04.599 16:06:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:04.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:04.599 16:06:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:04.599 16:06:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:04.599 16:06:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.858 [2024-12-12 16:06:30.988326] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
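The `(( i = 1 )) / (( i <= num_base_bdevs )) / echo BaseBdev$i` trace above is the harness building its list of base bdev names before creating the raid. A minimal standalone sketch of that loop (assuming `num_base_bdevs=3` and the `BaseBdev<i>` naming seen in the trace; not the verbatim `bdev_raid.sh` code):

```shell
# Sketch of the base_bdevs list construction traced above.
# Assumption: 3 base bdevs named BaseBdev1..BaseBdev3, as in this test run.
num_base_bdevs=3
base_bdevs=()
(( i = 1 ))
while (( i <= num_base_bdevs )); do
  base_bdevs+=("BaseBdev$i")   # the xtrace shows: echo BaseBdev1 / BaseBdev2 / BaseBdev3
  (( i++ ))
done
echo "${base_bdevs[*]}"
```

The resulting space-separated list is what later gets passed to `bdev_raid_create -b 'BaseBdev1 BaseBdev2 BaseBdev3'`.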
00:10:04.858 [2024-12-12 16:06:30.988510] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:04.858 [2024-12-12 16:06:31.147424] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:05.117 [2024-12-12 16:06:31.283241] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:05.376 [2024-12-12 16:06:31.535419] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:05.376 [2024-12-12 16:06:31.535596] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:05.636 16:06:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:05.636 16:06:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:10:05.636 16:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:05.636 16:06:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.636 16:06:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.636 [2024-12-12 16:06:31.820396] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:05.636 [2024-12-12 16:06:31.820575] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:05.636 [2024-12-12 16:06:31.820591] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:05.636 [2024-12-12 16:06:31.820602] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:05.636 [2024-12-12 16:06:31.820616] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:10:05.636 [2024-12-12 16:06:31.820625] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:05.636 16:06:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.636 16:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:05.636 16:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:05.636 16:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:05.636 16:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:05.636 16:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:05.636 16:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:05.636 16:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:05.636 16:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:05.636 16:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:05.636 16:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:05.636 16:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.636 16:06:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.636 16:06:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.636 16:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:05.636 16:06:31 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.637 16:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:05.637 "name": "Existed_Raid", 00:10:05.637 "uuid": "0a07eb14-7be0-46e2-b7b0-b83a460b181e", 00:10:05.637 "strip_size_kb": 0, 00:10:05.637 "state": "configuring", 00:10:05.637 "raid_level": "raid1", 00:10:05.637 "superblock": true, 00:10:05.637 "num_base_bdevs": 3, 00:10:05.637 "num_base_bdevs_discovered": 0, 00:10:05.637 "num_base_bdevs_operational": 3, 00:10:05.637 "base_bdevs_list": [ 00:10:05.637 { 00:10:05.637 "name": "BaseBdev1", 00:10:05.637 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.637 "is_configured": false, 00:10:05.637 "data_offset": 0, 00:10:05.637 "data_size": 0 00:10:05.637 }, 00:10:05.637 { 00:10:05.637 "name": "BaseBdev2", 00:10:05.637 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.637 "is_configured": false, 00:10:05.637 "data_offset": 0, 00:10:05.637 "data_size": 0 00:10:05.637 }, 00:10:05.637 { 00:10:05.637 "name": "BaseBdev3", 00:10:05.637 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.637 "is_configured": false, 00:10:05.637 "data_offset": 0, 00:10:05.637 "data_size": 0 00:10:05.637 } 00:10:05.637 ] 00:10:05.637 }' 00:10:05.637 16:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:05.637 16:06:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.207 16:06:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:06.207 16:06:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.207 16:06:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.207 [2024-12-12 16:06:32.271598] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:06.207 [2024-12-12 16:06:32.271759] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:06.207 16:06:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.207 16:06:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:06.207 16:06:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.207 16:06:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.207 [2024-12-12 16:06:32.283560] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:06.207 [2024-12-12 16:06:32.283650] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:06.207 [2024-12-12 16:06:32.283661] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:06.207 [2024-12-12 16:06:32.283672] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:06.207 [2024-12-12 16:06:32.283678] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:06.207 [2024-12-12 16:06:32.283688] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:06.207 16:06:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.207 16:06:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:06.207 16:06:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.207 16:06:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.207 [2024-12-12 16:06:32.341112] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:06.207 BaseBdev1 
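The `verify_raid_bdev_state` checks traced throughout this run follow one pattern: fetch all raid bdevs over RPC, pick the one under test with `jq`, then compare fields against a backslash-escaped literal (e.g. `[[ true == \t\r\u\e ]]`) so the right-hand side can never be treated as a glob. A hedged sketch of that pattern — here a canned JSON sample stands in for the real `rpc_cmd bdev_raid_get_bdevs all` output, so the sample values are illustrative, not from a live target:

```shell
# Sketch of the verify_raid_bdev_state pattern. The JSON below is a stand-in
# for `rpc_cmd bdev_raid_get_bdevs all`; field values are hypothetical.
raid_bdevs_json='[{"name": "Existed_Raid", "state": "online", "raid_level": "raid1", "num_base_bdevs_discovered": 3, "num_base_bdevs_operational": 3}]'

# Select the raid bdev under test, as the harness does with jq.
raid_bdev_info=$(echo "$raid_bdevs_json" | jq -r '.[] | select(.name == "Existed_Raid")')
state=$(echo "$raid_bdev_info" | jq -r '.state')

# Escaped-literal comparison (glob-proof), matching the [[ x == \o\n\l\i\n\e ]] idiom above.
if [[ $state == \o\n\l\i\n\e ]]; then
  echo "state ok: $state"
fi
```

The escaped form matters because `[[ $a == $b ]]` performs pattern matching on the right-hand side; spelling the expected value as `\o\n\l\i\n\e` forces an exact string comparison.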
00:10:06.207 16:06:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.207 16:06:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:06.207 16:06:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:06.207 16:06:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:06.207 16:06:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:06.207 16:06:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:06.207 16:06:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:06.207 16:06:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:06.207 16:06:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.207 16:06:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.207 16:06:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.207 16:06:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:06.207 16:06:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.207 16:06:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.207 [ 00:10:06.207 { 00:10:06.207 "name": "BaseBdev1", 00:10:06.207 "aliases": [ 00:10:06.207 "c7926751-a75f-45f9-aef3-d0f6e6f76146" 00:10:06.207 ], 00:10:06.207 "product_name": "Malloc disk", 00:10:06.207 "block_size": 512, 00:10:06.207 "num_blocks": 65536, 00:10:06.207 "uuid": "c7926751-a75f-45f9-aef3-d0f6e6f76146", 00:10:06.207 "assigned_rate_limits": { 00:10:06.207 
"rw_ios_per_sec": 0, 00:10:06.207 "rw_mbytes_per_sec": 0, 00:10:06.207 "r_mbytes_per_sec": 0, 00:10:06.207 "w_mbytes_per_sec": 0 00:10:06.207 }, 00:10:06.207 "claimed": true, 00:10:06.207 "claim_type": "exclusive_write", 00:10:06.207 "zoned": false, 00:10:06.207 "supported_io_types": { 00:10:06.207 "read": true, 00:10:06.207 "write": true, 00:10:06.207 "unmap": true, 00:10:06.207 "flush": true, 00:10:06.207 "reset": true, 00:10:06.207 "nvme_admin": false, 00:10:06.207 "nvme_io": false, 00:10:06.207 "nvme_io_md": false, 00:10:06.207 "write_zeroes": true, 00:10:06.207 "zcopy": true, 00:10:06.207 "get_zone_info": false, 00:10:06.207 "zone_management": false, 00:10:06.207 "zone_append": false, 00:10:06.207 "compare": false, 00:10:06.207 "compare_and_write": false, 00:10:06.207 "abort": true, 00:10:06.207 "seek_hole": false, 00:10:06.207 "seek_data": false, 00:10:06.207 "copy": true, 00:10:06.207 "nvme_iov_md": false 00:10:06.207 }, 00:10:06.207 "memory_domains": [ 00:10:06.207 { 00:10:06.207 "dma_device_id": "system", 00:10:06.207 "dma_device_type": 1 00:10:06.207 }, 00:10:06.207 { 00:10:06.207 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.207 "dma_device_type": 2 00:10:06.207 } 00:10:06.207 ], 00:10:06.207 "driver_specific": {} 00:10:06.207 } 00:10:06.207 ] 00:10:06.208 16:06:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.208 16:06:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:06.208 16:06:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:06.208 16:06:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:06.208 16:06:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:06.208 16:06:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:10:06.208 16:06:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:06.208 16:06:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:06.208 16:06:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:06.208 16:06:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:06.208 16:06:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:06.208 16:06:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:06.208 16:06:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.208 16:06:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:06.208 16:06:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.208 16:06:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.208 16:06:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.208 16:06:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:06.208 "name": "Existed_Raid", 00:10:06.208 "uuid": "f0c0157b-0f08-4e57-94ff-f08e96a51c74", 00:10:06.208 "strip_size_kb": 0, 00:10:06.208 "state": "configuring", 00:10:06.208 "raid_level": "raid1", 00:10:06.208 "superblock": true, 00:10:06.208 "num_base_bdevs": 3, 00:10:06.208 "num_base_bdevs_discovered": 1, 00:10:06.208 "num_base_bdevs_operational": 3, 00:10:06.208 "base_bdevs_list": [ 00:10:06.208 { 00:10:06.208 "name": "BaseBdev1", 00:10:06.208 "uuid": "c7926751-a75f-45f9-aef3-d0f6e6f76146", 00:10:06.208 "is_configured": true, 00:10:06.208 "data_offset": 2048, 00:10:06.208 "data_size": 63488 
00:10:06.208 }, 00:10:06.208 { 00:10:06.208 "name": "BaseBdev2", 00:10:06.208 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.208 "is_configured": false, 00:10:06.208 "data_offset": 0, 00:10:06.208 "data_size": 0 00:10:06.208 }, 00:10:06.208 { 00:10:06.208 "name": "BaseBdev3", 00:10:06.208 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.208 "is_configured": false, 00:10:06.208 "data_offset": 0, 00:10:06.208 "data_size": 0 00:10:06.208 } 00:10:06.208 ] 00:10:06.208 }' 00:10:06.208 16:06:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:06.208 16:06:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.468 16:06:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:06.468 16:06:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.729 16:06:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.729 [2024-12-12 16:06:32.824407] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:06.729 [2024-12-12 16:06:32.824494] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:06.729 16:06:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.729 16:06:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:06.729 16:06:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.729 16:06:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.729 [2024-12-12 16:06:32.836457] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:06.729 [2024-12-12 16:06:32.838695] 
bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:06.729 [2024-12-12 16:06:32.838750] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:06.729 [2024-12-12 16:06:32.838762] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:06.729 [2024-12-12 16:06:32.838772] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:06.729 16:06:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.729 16:06:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:06.729 16:06:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:06.729 16:06:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:06.729 16:06:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:06.729 16:06:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:06.729 16:06:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:06.729 16:06:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:06.729 16:06:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:06.729 16:06:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:06.729 16:06:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:06.729 16:06:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:06.729 16:06:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:10:06.729 16:06:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.729 16:06:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.729 16:06:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.729 16:06:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:06.729 16:06:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.729 16:06:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:06.729 "name": "Existed_Raid", 00:10:06.729 "uuid": "d14a9576-808b-44e1-ad97-319983d468b3", 00:10:06.729 "strip_size_kb": 0, 00:10:06.729 "state": "configuring", 00:10:06.729 "raid_level": "raid1", 00:10:06.729 "superblock": true, 00:10:06.729 "num_base_bdevs": 3, 00:10:06.729 "num_base_bdevs_discovered": 1, 00:10:06.729 "num_base_bdevs_operational": 3, 00:10:06.729 "base_bdevs_list": [ 00:10:06.729 { 00:10:06.729 "name": "BaseBdev1", 00:10:06.729 "uuid": "c7926751-a75f-45f9-aef3-d0f6e6f76146", 00:10:06.729 "is_configured": true, 00:10:06.729 "data_offset": 2048, 00:10:06.729 "data_size": 63488 00:10:06.729 }, 00:10:06.729 { 00:10:06.729 "name": "BaseBdev2", 00:10:06.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.729 "is_configured": false, 00:10:06.729 "data_offset": 0, 00:10:06.729 "data_size": 0 00:10:06.729 }, 00:10:06.729 { 00:10:06.729 "name": "BaseBdev3", 00:10:06.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.729 "is_configured": false, 00:10:06.729 "data_offset": 0, 00:10:06.729 "data_size": 0 00:10:06.729 } 00:10:06.729 ] 00:10:06.729 }' 00:10:06.729 16:06:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:06.729 16:06:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:10:06.988 16:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:06.988 16:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.988 16:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.988 [2024-12-12 16:06:33.299771] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:06.988 BaseBdev2 00:10:06.988 16:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.988 16:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:06.988 16:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:06.988 16:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:06.988 16:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:06.988 16:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:06.988 16:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:06.988 16:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:06.988 16:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.988 16:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.988 16:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.988 16:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:06.989 16:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:06.989 16:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.989 [ 00:10:06.989 { 00:10:06.989 "name": "BaseBdev2", 00:10:06.989 "aliases": [ 00:10:06.989 "0785b6ff-f060-4719-a12c-1c103ac9e4fe" 00:10:06.989 ], 00:10:06.989 "product_name": "Malloc disk", 00:10:06.989 "block_size": 512, 00:10:06.989 "num_blocks": 65536, 00:10:06.989 "uuid": "0785b6ff-f060-4719-a12c-1c103ac9e4fe", 00:10:06.989 "assigned_rate_limits": { 00:10:06.989 "rw_ios_per_sec": 0, 00:10:06.989 "rw_mbytes_per_sec": 0, 00:10:06.989 "r_mbytes_per_sec": 0, 00:10:06.989 "w_mbytes_per_sec": 0 00:10:06.989 }, 00:10:06.989 "claimed": true, 00:10:06.989 "claim_type": "exclusive_write", 00:10:06.989 "zoned": false, 00:10:06.989 "supported_io_types": { 00:10:06.989 "read": true, 00:10:06.989 "write": true, 00:10:06.989 "unmap": true, 00:10:06.989 "flush": true, 00:10:06.989 "reset": true, 00:10:06.989 "nvme_admin": false, 00:10:06.989 "nvme_io": false, 00:10:06.989 "nvme_io_md": false, 00:10:06.989 "write_zeroes": true, 00:10:06.989 "zcopy": true, 00:10:06.989 "get_zone_info": false, 00:10:06.989 "zone_management": false, 00:10:06.989 "zone_append": false, 00:10:06.989 "compare": false, 00:10:06.989 "compare_and_write": false, 00:10:06.989 "abort": true, 00:10:06.989 "seek_hole": false, 00:10:06.989 "seek_data": false, 00:10:06.989 "copy": true, 00:10:06.989 "nvme_iov_md": false 00:10:06.989 }, 00:10:06.989 "memory_domains": [ 00:10:06.989 { 00:10:06.989 "dma_device_id": "system", 00:10:06.989 "dma_device_type": 1 00:10:06.989 }, 00:10:06.989 { 00:10:06.989 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.989 "dma_device_type": 2 00:10:06.989 } 00:10:06.989 ], 00:10:06.989 "driver_specific": {} 00:10:06.989 } 00:10:07.249 ] 00:10:07.249 16:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.249 16:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:10:07.249 16:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:07.249 16:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:07.249 16:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:07.249 16:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:07.249 16:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:07.249 16:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:07.249 16:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:07.249 16:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:07.249 16:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:07.249 16:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:07.249 16:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:07.249 16:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:07.249 16:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:07.249 16:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.249 16:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.249 16:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.249 16:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.249 
16:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:07.249 "name": "Existed_Raid", 00:10:07.249 "uuid": "d14a9576-808b-44e1-ad97-319983d468b3", 00:10:07.249 "strip_size_kb": 0, 00:10:07.249 "state": "configuring", 00:10:07.249 "raid_level": "raid1", 00:10:07.249 "superblock": true, 00:10:07.249 "num_base_bdevs": 3, 00:10:07.249 "num_base_bdevs_discovered": 2, 00:10:07.249 "num_base_bdevs_operational": 3, 00:10:07.249 "base_bdevs_list": [ 00:10:07.249 { 00:10:07.249 "name": "BaseBdev1", 00:10:07.249 "uuid": "c7926751-a75f-45f9-aef3-d0f6e6f76146", 00:10:07.249 "is_configured": true, 00:10:07.249 "data_offset": 2048, 00:10:07.249 "data_size": 63488 00:10:07.249 }, 00:10:07.249 { 00:10:07.249 "name": "BaseBdev2", 00:10:07.249 "uuid": "0785b6ff-f060-4719-a12c-1c103ac9e4fe", 00:10:07.249 "is_configured": true, 00:10:07.249 "data_offset": 2048, 00:10:07.249 "data_size": 63488 00:10:07.249 }, 00:10:07.249 { 00:10:07.249 "name": "BaseBdev3", 00:10:07.249 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:07.249 "is_configured": false, 00:10:07.249 "data_offset": 0, 00:10:07.249 "data_size": 0 00:10:07.249 } 00:10:07.249 ] 00:10:07.249 }' 00:10:07.249 16:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:07.249 16:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.508 16:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:07.508 16:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.509 16:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.509 [2024-12-12 16:06:33.829062] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:07.509 [2024-12-12 16:06:33.829359] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007e80 00:10:07.509 [2024-12-12 16:06:33.829387] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:07.509 BaseBdev3 00:10:07.509 [2024-12-12 16:06:33.829693] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:07.509 [2024-12-12 16:06:33.829883] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:07.509 [2024-12-12 16:06:33.829907] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:07.509 [2024-12-12 16:06:33.830083] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:07.509 16:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.509 16:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:07.509 16:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:07.509 16:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:07.509 16:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:07.509 16:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:07.509 16:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:07.509 16:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:07.509 16:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.509 16:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.509 16:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.509 16:06:33 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:07.509 16:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.509 16:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.509 [ 00:10:07.509 { 00:10:07.509 "name": "BaseBdev3", 00:10:07.509 "aliases": [ 00:10:07.509 "a6ead2b6-7735-40e1-abe2-a76b72f2918c" 00:10:07.509 ], 00:10:07.769 "product_name": "Malloc disk", 00:10:07.769 "block_size": 512, 00:10:07.769 "num_blocks": 65536, 00:10:07.769 "uuid": "a6ead2b6-7735-40e1-abe2-a76b72f2918c", 00:10:07.769 "assigned_rate_limits": { 00:10:07.769 "rw_ios_per_sec": 0, 00:10:07.769 "rw_mbytes_per_sec": 0, 00:10:07.769 "r_mbytes_per_sec": 0, 00:10:07.769 "w_mbytes_per_sec": 0 00:10:07.769 }, 00:10:07.769 "claimed": true, 00:10:07.769 "claim_type": "exclusive_write", 00:10:07.769 "zoned": false, 00:10:07.769 "supported_io_types": { 00:10:07.769 "read": true, 00:10:07.769 "write": true, 00:10:07.769 "unmap": true, 00:10:07.769 "flush": true, 00:10:07.769 "reset": true, 00:10:07.769 "nvme_admin": false, 00:10:07.769 "nvme_io": false, 00:10:07.769 "nvme_io_md": false, 00:10:07.769 "write_zeroes": true, 00:10:07.769 "zcopy": true, 00:10:07.769 "get_zone_info": false, 00:10:07.769 "zone_management": false, 00:10:07.769 "zone_append": false, 00:10:07.769 "compare": false, 00:10:07.769 "compare_and_write": false, 00:10:07.769 "abort": true, 00:10:07.769 "seek_hole": false, 00:10:07.769 "seek_data": false, 00:10:07.769 "copy": true, 00:10:07.769 "nvme_iov_md": false 00:10:07.769 }, 00:10:07.769 "memory_domains": [ 00:10:07.769 { 00:10:07.769 "dma_device_id": "system", 00:10:07.769 "dma_device_type": 1 00:10:07.769 }, 00:10:07.769 { 00:10:07.769 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:07.769 "dma_device_type": 2 00:10:07.769 } 00:10:07.769 ], 00:10:07.769 "driver_specific": {} 00:10:07.769 } 00:10:07.769 ] 
00:10:07.769 16:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.769 16:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:07.769 16:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:07.769 16:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:07.769 16:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:07.769 16:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:07.769 16:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:07.769 16:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:07.769 16:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:07.769 16:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:07.769 16:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:07.769 16:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:07.769 16:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:07.769 16:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:07.769 16:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.769 16:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:07.769 16:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.770 
16:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.770 16:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.770 16:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:07.770 "name": "Existed_Raid", 00:10:07.770 "uuid": "d14a9576-808b-44e1-ad97-319983d468b3", 00:10:07.770 "strip_size_kb": 0, 00:10:07.770 "state": "online", 00:10:07.770 "raid_level": "raid1", 00:10:07.770 "superblock": true, 00:10:07.770 "num_base_bdevs": 3, 00:10:07.770 "num_base_bdevs_discovered": 3, 00:10:07.770 "num_base_bdevs_operational": 3, 00:10:07.770 "base_bdevs_list": [ 00:10:07.770 { 00:10:07.770 "name": "BaseBdev1", 00:10:07.770 "uuid": "c7926751-a75f-45f9-aef3-d0f6e6f76146", 00:10:07.770 "is_configured": true, 00:10:07.770 "data_offset": 2048, 00:10:07.770 "data_size": 63488 00:10:07.770 }, 00:10:07.770 { 00:10:07.770 "name": "BaseBdev2", 00:10:07.770 "uuid": "0785b6ff-f060-4719-a12c-1c103ac9e4fe", 00:10:07.770 "is_configured": true, 00:10:07.770 "data_offset": 2048, 00:10:07.770 "data_size": 63488 00:10:07.770 }, 00:10:07.770 { 00:10:07.770 "name": "BaseBdev3", 00:10:07.770 "uuid": "a6ead2b6-7735-40e1-abe2-a76b72f2918c", 00:10:07.770 "is_configured": true, 00:10:07.770 "data_offset": 2048, 00:10:07.770 "data_size": 63488 00:10:07.770 } 00:10:07.770 ] 00:10:07.770 }' 00:10:07.770 16:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:07.770 16:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.030 16:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:08.030 16:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:08.030 16:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:10:08.030 16:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:08.030 16:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:08.030 16:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:08.030 16:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:08.030 16:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:08.030 16:06:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.030 16:06:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.030 [2024-12-12 16:06:34.320637] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:08.030 16:06:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.030 16:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:08.030 "name": "Existed_Raid", 00:10:08.030 "aliases": [ 00:10:08.030 "d14a9576-808b-44e1-ad97-319983d468b3" 00:10:08.030 ], 00:10:08.030 "product_name": "Raid Volume", 00:10:08.030 "block_size": 512, 00:10:08.030 "num_blocks": 63488, 00:10:08.030 "uuid": "d14a9576-808b-44e1-ad97-319983d468b3", 00:10:08.030 "assigned_rate_limits": { 00:10:08.031 "rw_ios_per_sec": 0, 00:10:08.031 "rw_mbytes_per_sec": 0, 00:10:08.031 "r_mbytes_per_sec": 0, 00:10:08.031 "w_mbytes_per_sec": 0 00:10:08.031 }, 00:10:08.031 "claimed": false, 00:10:08.031 "zoned": false, 00:10:08.031 "supported_io_types": { 00:10:08.031 "read": true, 00:10:08.031 "write": true, 00:10:08.031 "unmap": false, 00:10:08.031 "flush": false, 00:10:08.031 "reset": true, 00:10:08.031 "nvme_admin": false, 00:10:08.031 "nvme_io": false, 00:10:08.031 "nvme_io_md": false, 00:10:08.031 "write_zeroes": true, 
00:10:08.031 "zcopy": false, 00:10:08.031 "get_zone_info": false, 00:10:08.031 "zone_management": false, 00:10:08.031 "zone_append": false, 00:10:08.031 "compare": false, 00:10:08.031 "compare_and_write": false, 00:10:08.031 "abort": false, 00:10:08.031 "seek_hole": false, 00:10:08.031 "seek_data": false, 00:10:08.031 "copy": false, 00:10:08.031 "nvme_iov_md": false 00:10:08.031 }, 00:10:08.031 "memory_domains": [ 00:10:08.031 { 00:10:08.031 "dma_device_id": "system", 00:10:08.031 "dma_device_type": 1 00:10:08.031 }, 00:10:08.031 { 00:10:08.031 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:08.031 "dma_device_type": 2 00:10:08.031 }, 00:10:08.031 { 00:10:08.031 "dma_device_id": "system", 00:10:08.031 "dma_device_type": 1 00:10:08.031 }, 00:10:08.031 { 00:10:08.031 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:08.031 "dma_device_type": 2 00:10:08.031 }, 00:10:08.031 { 00:10:08.031 "dma_device_id": "system", 00:10:08.031 "dma_device_type": 1 00:10:08.031 }, 00:10:08.031 { 00:10:08.031 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:08.031 "dma_device_type": 2 00:10:08.031 } 00:10:08.031 ], 00:10:08.031 "driver_specific": { 00:10:08.031 "raid": { 00:10:08.031 "uuid": "d14a9576-808b-44e1-ad97-319983d468b3", 00:10:08.031 "strip_size_kb": 0, 00:10:08.031 "state": "online", 00:10:08.031 "raid_level": "raid1", 00:10:08.031 "superblock": true, 00:10:08.031 "num_base_bdevs": 3, 00:10:08.031 "num_base_bdevs_discovered": 3, 00:10:08.031 "num_base_bdevs_operational": 3, 00:10:08.031 "base_bdevs_list": [ 00:10:08.031 { 00:10:08.031 "name": "BaseBdev1", 00:10:08.031 "uuid": "c7926751-a75f-45f9-aef3-d0f6e6f76146", 00:10:08.031 "is_configured": true, 00:10:08.031 "data_offset": 2048, 00:10:08.031 "data_size": 63488 00:10:08.031 }, 00:10:08.031 { 00:10:08.031 "name": "BaseBdev2", 00:10:08.031 "uuid": "0785b6ff-f060-4719-a12c-1c103ac9e4fe", 00:10:08.031 "is_configured": true, 00:10:08.031 "data_offset": 2048, 00:10:08.031 "data_size": 63488 00:10:08.031 }, 00:10:08.031 { 
00:10:08.031 "name": "BaseBdev3", 00:10:08.031 "uuid": "a6ead2b6-7735-40e1-abe2-a76b72f2918c", 00:10:08.031 "is_configured": true, 00:10:08.031 "data_offset": 2048, 00:10:08.031 "data_size": 63488 00:10:08.031 } 00:10:08.031 ] 00:10:08.031 } 00:10:08.031 } 00:10:08.031 }' 00:10:08.031 16:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:08.293 16:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:08.293 BaseBdev2 00:10:08.293 BaseBdev3' 00:10:08.293 16:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:08.293 16:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:08.293 16:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:08.293 16:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:08.293 16:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:08.293 16:06:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.293 16:06:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.293 16:06:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.293 16:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:08.293 16:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:08.293 16:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:08.293 16:06:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:08.293 16:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:08.293 16:06:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.293 16:06:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.293 16:06:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.293 16:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:08.293 16:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:08.293 16:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:08.293 16:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:08.293 16:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:08.293 16:06:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.293 16:06:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.293 16:06:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.293 16:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:08.293 16:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:08.293 16:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:08.293 16:06:34 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.293 16:06:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.293 [2024-12-12 16:06:34.579871] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:08.553 16:06:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.553 16:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:08.553 16:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:10:08.553 16:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:08.553 16:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:10:08.553 16:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:10:08.553 16:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:10:08.553 16:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:08.553 16:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:08.553 16:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:08.553 16:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:08.553 16:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:08.553 16:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:08.553 16:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:08.553 16:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:08.554 
16:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:08.554 16:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.554 16:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:08.554 16:06:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.554 16:06:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.554 16:06:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.554 16:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:08.554 "name": "Existed_Raid", 00:10:08.554 "uuid": "d14a9576-808b-44e1-ad97-319983d468b3", 00:10:08.554 "strip_size_kb": 0, 00:10:08.554 "state": "online", 00:10:08.554 "raid_level": "raid1", 00:10:08.554 "superblock": true, 00:10:08.554 "num_base_bdevs": 3, 00:10:08.554 "num_base_bdevs_discovered": 2, 00:10:08.554 "num_base_bdevs_operational": 2, 00:10:08.554 "base_bdevs_list": [ 00:10:08.554 { 00:10:08.554 "name": null, 00:10:08.554 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:08.554 "is_configured": false, 00:10:08.554 "data_offset": 0, 00:10:08.554 "data_size": 63488 00:10:08.554 }, 00:10:08.554 { 00:10:08.554 "name": "BaseBdev2", 00:10:08.554 "uuid": "0785b6ff-f060-4719-a12c-1c103ac9e4fe", 00:10:08.554 "is_configured": true, 00:10:08.554 "data_offset": 2048, 00:10:08.554 "data_size": 63488 00:10:08.554 }, 00:10:08.554 { 00:10:08.554 "name": "BaseBdev3", 00:10:08.554 "uuid": "a6ead2b6-7735-40e1-abe2-a76b72f2918c", 00:10:08.554 "is_configured": true, 00:10:08.554 "data_offset": 2048, 00:10:08.554 "data_size": 63488 00:10:08.554 } 00:10:08.554 ] 00:10:08.554 }' 00:10:08.554 16:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:08.554 
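The degraded-state check traced above (`rpc_cmd bdev_raid_get_bdevs all` piped through `jq -r '.[] | select(.name == "Existed_Raid")'`) can be replayed offline against the JSON this log prints. The sketch below emulates that jq filter in Python; the sample object is copied from the dump above, and `select_raid` is an illustrative helper, not an SPDK API:

```python
import json

# Sample copied from the log: Existed_Raid after BaseBdev1 was deleted.
raid_bdevs_json = '''
[
  {
    "name": "Existed_Raid",
    "state": "online",
    "raid_level": "raid1",
    "num_base_bdevs": 3,
    "num_base_bdevs_discovered": 2,
    "num_base_bdevs_operational": 2,
    "base_bdevs_list": [
      {"name": null, "is_configured": false, "data_offset": 0, "data_size": 63488},
      {"name": "BaseBdev2", "is_configured": true, "data_offset": 2048, "data_size": 63488},
      {"name": "BaseBdev3", "is_configured": true, "data_offset": 2048, "data_size": 63488}
    ]
  }
]
'''

def select_raid(bdevs, name):
    """Python stand-in for jq '.[] | select(.name == NAME)'."""
    return next(b for b in bdevs if b["name"] == name)

info = select_raid(json.loads(raid_bdevs_json), "Existed_Raid")

# Mirrors the test's expectation after one removal: with redundancy
# (raid1), the array stays online with 2 of 3 base bdevs operational.
assert info["state"] == "online"
assert info["num_base_bdevs_discovered"] == 2
configured = [b["name"] for b in info["base_bdevs_list"] if b["is_configured"]]
print(configured)  # ['BaseBdev2', 'BaseBdev3']
```

Note how the removed slot keeps its `data_size` but loses its name (`null`) and `is_configured` flag, which is exactly what the dump in the log shows.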
16:06:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.816 16:06:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:08.816 16:06:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:08.816 16:06:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.816 16:06:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.816 16:06:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.816 16:06:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:08.816 16:06:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.816 16:06:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:08.816 16:06:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:08.816 16:06:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:08.816 16:06:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.816 16:06:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.816 [2024-12-12 16:06:35.166279] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:09.076 16:06:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.076 16:06:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:09.076 16:06:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:09.076 16:06:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:10:09.076 16:06:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:09.076 16:06:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.076 16:06:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.076 16:06:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.076 16:06:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:09.076 16:06:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:09.076 16:06:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:09.076 16:06:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.076 16:06:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.076 [2024-12-12 16:06:35.346511] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:09.076 [2024-12-12 16:06:35.346665] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:09.336 [2024-12-12 16:06:35.466343] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:09.336 [2024-12-12 16:06:35.466412] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:09.337 [2024-12-12 16:06:35.466425] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:09.337 16:06:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.337 16:06:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:09.337 16:06:35 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:09.337 16:06:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.337 16:06:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.337 16:06:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.337 16:06:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:09.337 16:06:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.337 16:06:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:09.337 16:06:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:09.337 16:06:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:10:09.337 16:06:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:09.337 16:06:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:09.337 16:06:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:09.337 16:06:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.337 16:06:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.337 BaseBdev2 00:10:09.337 16:06:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.337 16:06:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:09.337 16:06:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:09.337 16:06:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:10:09.337 16:06:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:09.337 16:06:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:09.337 16:06:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:09.337 16:06:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:09.337 16:06:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.337 16:06:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.337 16:06:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.337 16:06:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:09.337 16:06:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.337 16:06:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.337 [ 00:10:09.337 { 00:10:09.337 "name": "BaseBdev2", 00:10:09.337 "aliases": [ 00:10:09.337 "e0e56360-5a89-4b98-9dab-26c952702827" 00:10:09.337 ], 00:10:09.337 "product_name": "Malloc disk", 00:10:09.337 "block_size": 512, 00:10:09.337 "num_blocks": 65536, 00:10:09.337 "uuid": "e0e56360-5a89-4b98-9dab-26c952702827", 00:10:09.337 "assigned_rate_limits": { 00:10:09.337 "rw_ios_per_sec": 0, 00:10:09.337 "rw_mbytes_per_sec": 0, 00:10:09.337 "r_mbytes_per_sec": 0, 00:10:09.337 "w_mbytes_per_sec": 0 00:10:09.337 }, 00:10:09.337 "claimed": false, 00:10:09.337 "zoned": false, 00:10:09.337 "supported_io_types": { 00:10:09.337 "read": true, 00:10:09.337 "write": true, 00:10:09.337 "unmap": true, 00:10:09.337 "flush": true, 00:10:09.337 "reset": true, 00:10:09.337 "nvme_admin": false, 00:10:09.337 "nvme_io": false, 00:10:09.337 
"nvme_io_md": false, 00:10:09.337 "write_zeroes": true, 00:10:09.337 "zcopy": true, 00:10:09.337 "get_zone_info": false, 00:10:09.337 "zone_management": false, 00:10:09.337 "zone_append": false, 00:10:09.337 "compare": false, 00:10:09.337 "compare_and_write": false, 00:10:09.337 "abort": true, 00:10:09.337 "seek_hole": false, 00:10:09.337 "seek_data": false, 00:10:09.337 "copy": true, 00:10:09.337 "nvme_iov_md": false 00:10:09.337 }, 00:10:09.337 "memory_domains": [ 00:10:09.337 { 00:10:09.337 "dma_device_id": "system", 00:10:09.337 "dma_device_type": 1 00:10:09.337 }, 00:10:09.337 { 00:10:09.337 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:09.337 "dma_device_type": 2 00:10:09.337 } 00:10:09.337 ], 00:10:09.337 "driver_specific": {} 00:10:09.337 } 00:10:09.337 ] 00:10:09.337 16:06:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.337 16:06:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:09.337 16:06:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:09.337 16:06:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:09.337 16:06:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:09.337 16:06:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.337 16:06:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.337 BaseBdev3 00:10:09.337 16:06:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.337 16:06:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:09.337 16:06:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:09.337 16:06:35 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:09.337 16:06:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:09.337 16:06:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:09.337 16:06:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:09.337 16:06:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:09.337 16:06:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.337 16:06:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.337 16:06:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.337 16:06:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:09.337 16:06:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.337 16:06:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.598 [ 00:10:09.598 { 00:10:09.598 "name": "BaseBdev3", 00:10:09.598 "aliases": [ 00:10:09.598 "969ff8f3-1371-4256-bf1a-3bea9608c0b4" 00:10:09.598 ], 00:10:09.598 "product_name": "Malloc disk", 00:10:09.598 "block_size": 512, 00:10:09.598 "num_blocks": 65536, 00:10:09.598 "uuid": "969ff8f3-1371-4256-bf1a-3bea9608c0b4", 00:10:09.598 "assigned_rate_limits": { 00:10:09.598 "rw_ios_per_sec": 0, 00:10:09.598 "rw_mbytes_per_sec": 0, 00:10:09.598 "r_mbytes_per_sec": 0, 00:10:09.598 "w_mbytes_per_sec": 0 00:10:09.598 }, 00:10:09.598 "claimed": false, 00:10:09.598 "zoned": false, 00:10:09.598 "supported_io_types": { 00:10:09.598 "read": true, 00:10:09.598 "write": true, 00:10:09.598 "unmap": true, 00:10:09.598 "flush": true, 00:10:09.598 "reset": true, 00:10:09.598 "nvme_admin": false, 
00:10:09.598 "nvme_io": false, 00:10:09.598 "nvme_io_md": false, 00:10:09.598 "write_zeroes": true, 00:10:09.598 "zcopy": true, 00:10:09.598 "get_zone_info": false, 00:10:09.598 "zone_management": false, 00:10:09.598 "zone_append": false, 00:10:09.598 "compare": false, 00:10:09.598 "compare_and_write": false, 00:10:09.598 "abort": true, 00:10:09.598 "seek_hole": false, 00:10:09.598 "seek_data": false, 00:10:09.598 "copy": true, 00:10:09.598 "nvme_iov_md": false 00:10:09.598 }, 00:10:09.598 "memory_domains": [ 00:10:09.598 { 00:10:09.598 "dma_device_id": "system", 00:10:09.598 "dma_device_type": 1 00:10:09.598 }, 00:10:09.598 { 00:10:09.598 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:09.598 "dma_device_type": 2 00:10:09.598 } 00:10:09.598 ], 00:10:09.598 "driver_specific": {} 00:10:09.598 } 00:10:09.598 ] 00:10:09.598 16:06:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.598 16:06:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:09.598 16:06:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:09.598 16:06:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:09.598 16:06:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:09.598 16:06:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.598 16:06:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.598 [2024-12-12 16:06:35.703038] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:09.598 [2024-12-12 16:06:35.703196] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:09.598 [2024-12-12 16:06:35.703246] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:09.598 [2024-12-12 16:06:35.705615] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:09.598 16:06:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.598 16:06:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:09.598 16:06:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:09.598 16:06:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:09.598 16:06:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:09.598 16:06:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:09.598 16:06:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:09.598 16:06:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:09.598 16:06:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:09.598 16:06:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:09.598 16:06:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:09.598 16:06:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.598 16:06:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:09.598 16:06:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.598 16:06:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.598 
16:06:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.599 16:06:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:09.599 "name": "Existed_Raid", 00:10:09.599 "uuid": "8ddfcf62-14ba-4119-b5f5-3a713d1f940e", 00:10:09.599 "strip_size_kb": 0, 00:10:09.599 "state": "configuring", 00:10:09.599 "raid_level": "raid1", 00:10:09.599 "superblock": true, 00:10:09.599 "num_base_bdevs": 3, 00:10:09.599 "num_base_bdevs_discovered": 2, 00:10:09.599 "num_base_bdevs_operational": 3, 00:10:09.599 "base_bdevs_list": [ 00:10:09.599 { 00:10:09.599 "name": "BaseBdev1", 00:10:09.599 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.599 "is_configured": false, 00:10:09.599 "data_offset": 0, 00:10:09.599 "data_size": 0 00:10:09.599 }, 00:10:09.599 { 00:10:09.599 "name": "BaseBdev2", 00:10:09.599 "uuid": "e0e56360-5a89-4b98-9dab-26c952702827", 00:10:09.599 "is_configured": true, 00:10:09.599 "data_offset": 2048, 00:10:09.599 "data_size": 63488 00:10:09.599 }, 00:10:09.599 { 00:10:09.599 "name": "BaseBdev3", 00:10:09.599 "uuid": "969ff8f3-1371-4256-bf1a-3bea9608c0b4", 00:10:09.599 "is_configured": true, 00:10:09.599 "data_offset": 2048, 00:10:09.599 "data_size": 63488 00:10:09.599 } 00:10:09.599 ] 00:10:09.599 }' 00:10:09.599 16:06:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:09.599 16:06:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.859 16:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:09.859 16:06:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.859 16:06:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.859 [2024-12-12 16:06:36.166285] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:09.859 16:06:36 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.859 16:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:09.859 16:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:09.859 16:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:09.859 16:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:09.859 16:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:09.859 16:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:09.859 16:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:09.859 16:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:09.859 16:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:09.859 16:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:09.859 16:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.859 16:06:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.859 16:06:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.859 16:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:09.859 16:06:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.119 16:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:10.119 "name": 
"Existed_Raid", 00:10:10.119 "uuid": "8ddfcf62-14ba-4119-b5f5-3a713d1f940e", 00:10:10.119 "strip_size_kb": 0, 00:10:10.119 "state": "configuring", 00:10:10.119 "raid_level": "raid1", 00:10:10.119 "superblock": true, 00:10:10.119 "num_base_bdevs": 3, 00:10:10.119 "num_base_bdevs_discovered": 1, 00:10:10.119 "num_base_bdevs_operational": 3, 00:10:10.119 "base_bdevs_list": [ 00:10:10.120 { 00:10:10.120 "name": "BaseBdev1", 00:10:10.120 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:10.120 "is_configured": false, 00:10:10.120 "data_offset": 0, 00:10:10.120 "data_size": 0 00:10:10.120 }, 00:10:10.120 { 00:10:10.120 "name": null, 00:10:10.120 "uuid": "e0e56360-5a89-4b98-9dab-26c952702827", 00:10:10.120 "is_configured": false, 00:10:10.120 "data_offset": 0, 00:10:10.120 "data_size": 63488 00:10:10.120 }, 00:10:10.120 { 00:10:10.120 "name": "BaseBdev3", 00:10:10.120 "uuid": "969ff8f3-1371-4256-bf1a-3bea9608c0b4", 00:10:10.120 "is_configured": true, 00:10:10.120 "data_offset": 2048, 00:10:10.120 "data_size": 63488 00:10:10.120 } 00:10:10.120 ] 00:10:10.120 }' 00:10:10.120 16:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:10.120 16:06:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.380 16:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.380 16:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:10.380 16:06:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.380 16:06:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.380 16:06:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.380 16:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:10.380 
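The `jq '.[0].base_bdevs_list[1].is_configured'` probe just traced returns `false` because `bdev_raid_remove_base_bdev BaseBdev2` cleared slot 1 while the array was still assembling. A minimal offline sketch, with the slot layout copied from the dump above (the jq path is emulated with plain indexing):

```python
import json

# Copied from the log after 'bdev_raid_remove_base_bdev BaseBdev2':
# the raid stays in "configuring" state; slot 1 keeps its UUID and
# data_size but loses its name and configured flag.
existed_raid = json.loads('''
{
  "name": "Existed_Raid",
  "state": "configuring",
  "num_base_bdevs_discovered": 1,
  "num_base_bdevs_operational": 3,
  "base_bdevs_list": [
    {"name": "BaseBdev1", "uuid": "00000000-0000-0000-0000-000000000000",
     "is_configured": false, "data_offset": 0, "data_size": 0},
    {"name": null, "uuid": "e0e56360-5a89-4b98-9dab-26c952702827",
     "is_configured": false, "data_offset": 0, "data_size": 63488},
    {"name": "BaseBdev3", "uuid": "969ff8f3-1371-4256-bf1a-3bea9608c0b4",
     "is_configured": true, "data_offset": 2048, "data_size": 63488}
  ]
}
''')

# Python equivalent of jq '.[0].base_bdevs_list[1].is_configured'
slot1_configured = existed_raid["base_bdevs_list"][1]["is_configured"]
assert slot1_configured is False  # the test's '[[ false == false ]]' branch
```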
16:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:10.380 16:06:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.380 16:06:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.380 [2024-12-12 16:06:36.714420] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:10.380 BaseBdev1 00:10:10.380 16:06:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.380 16:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:10.380 16:06:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:10.380 16:06:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:10.380 16:06:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:10.380 16:06:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:10.380 16:06:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:10.380 16:06:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:10.380 16:06:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.380 16:06:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.380 16:06:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.380 16:06:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:10.380 16:06:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:10.380 16:06:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.640 [ 00:10:10.640 { 00:10:10.640 "name": "BaseBdev1", 00:10:10.640 "aliases": [ 00:10:10.640 "9cde99fd-ada0-4066-a4e9-fe91dbedbb1f" 00:10:10.640 ], 00:10:10.640 "product_name": "Malloc disk", 00:10:10.640 "block_size": 512, 00:10:10.640 "num_blocks": 65536, 00:10:10.640 "uuid": "9cde99fd-ada0-4066-a4e9-fe91dbedbb1f", 00:10:10.640 "assigned_rate_limits": { 00:10:10.640 "rw_ios_per_sec": 0, 00:10:10.640 "rw_mbytes_per_sec": 0, 00:10:10.640 "r_mbytes_per_sec": 0, 00:10:10.640 "w_mbytes_per_sec": 0 00:10:10.640 }, 00:10:10.640 "claimed": true, 00:10:10.640 "claim_type": "exclusive_write", 00:10:10.640 "zoned": false, 00:10:10.640 "supported_io_types": { 00:10:10.640 "read": true, 00:10:10.640 "write": true, 00:10:10.640 "unmap": true, 00:10:10.640 "flush": true, 00:10:10.640 "reset": true, 00:10:10.640 "nvme_admin": false, 00:10:10.640 "nvme_io": false, 00:10:10.640 "nvme_io_md": false, 00:10:10.640 "write_zeroes": true, 00:10:10.640 "zcopy": true, 00:10:10.641 "get_zone_info": false, 00:10:10.641 "zone_management": false, 00:10:10.641 "zone_append": false, 00:10:10.641 "compare": false, 00:10:10.641 "compare_and_write": false, 00:10:10.641 "abort": true, 00:10:10.641 "seek_hole": false, 00:10:10.641 "seek_data": false, 00:10:10.641 "copy": true, 00:10:10.641 "nvme_iov_md": false 00:10:10.641 }, 00:10:10.641 "memory_domains": [ 00:10:10.641 { 00:10:10.641 "dma_device_id": "system", 00:10:10.641 "dma_device_type": 1 00:10:10.641 }, 00:10:10.641 { 00:10:10.641 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.641 "dma_device_type": 2 00:10:10.641 } 00:10:10.641 ], 00:10:10.641 "driver_specific": {} 00:10:10.641 } 00:10:10.641 ] 00:10:10.641 16:06:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.641 16:06:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:10.641 
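The `bdev_get_bdevs` dump above shows a Malloc base bdev advertising `abort`, `zcopy`, and `copy`, while the raid1 dump at the top of this section reports all three as `false`. A small comparison sketch; only flags visible in this log excerpt are included, and the dicts are hand-copied from the dumps, not queried from a live target:

```python
# supported_io_types excerpts copied from the two dumps in this log:
# a Malloc base bdev (e.g. BaseBdev1) vs. the raid1 bdev built on it.
malloc_io = {"zcopy": True, "abort": True, "copy": True,
             "compare": False, "compare_and_write": False,
             "nvme_iov_md": False}
raid1_io = {"zcopy": False, "abort": False, "copy": False,
            "compare": False, "compare_and_write": False,
            "nvme_iov_md": False}

# Flags the base bdev offers that the raid1 bdev does not pass through.
dropped = sorted(k for k, v in malloc_io.items() if v and not raid1_io[k])
print(dropped)  # ['abort', 'copy', 'zcopy']
```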
16:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:10.641 16:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:10.641 16:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:10.641 16:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:10.641 16:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:10.641 16:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:10.641 16:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:10.641 16:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:10.641 16:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:10.641 16:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:10.641 16:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.641 16:06:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.641 16:06:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.641 16:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:10.641 16:06:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.641 16:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:10.641 "name": "Existed_Raid", 00:10:10.641 "uuid": "8ddfcf62-14ba-4119-b5f5-3a713d1f940e", 00:10:10.641 "strip_size_kb": 0, 
00:10:10.641 "state": "configuring", 00:10:10.641 "raid_level": "raid1", 00:10:10.641 "superblock": true, 00:10:10.641 "num_base_bdevs": 3, 00:10:10.641 "num_base_bdevs_discovered": 2, 00:10:10.641 "num_base_bdevs_operational": 3, 00:10:10.641 "base_bdevs_list": [ 00:10:10.641 { 00:10:10.641 "name": "BaseBdev1", 00:10:10.641 "uuid": "9cde99fd-ada0-4066-a4e9-fe91dbedbb1f", 00:10:10.641 "is_configured": true, 00:10:10.641 "data_offset": 2048, 00:10:10.641 "data_size": 63488 00:10:10.641 }, 00:10:10.641 { 00:10:10.641 "name": null, 00:10:10.641 "uuid": "e0e56360-5a89-4b98-9dab-26c952702827", 00:10:10.641 "is_configured": false, 00:10:10.641 "data_offset": 0, 00:10:10.641 "data_size": 63488 00:10:10.641 }, 00:10:10.641 { 00:10:10.641 "name": "BaseBdev3", 00:10:10.641 "uuid": "969ff8f3-1371-4256-bf1a-3bea9608c0b4", 00:10:10.641 "is_configured": true, 00:10:10.641 "data_offset": 2048, 00:10:10.641 "data_size": 63488 00:10:10.641 } 00:10:10.641 ] 00:10:10.641 }' 00:10:10.641 16:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:10.641 16:06:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.900 16:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.900 16:06:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.900 16:06:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.900 16:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:10.900 16:06:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.900 16:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:10.900 16:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev3 00:10:10.900 16:06:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.900 16:06:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.900 [2024-12-12 16:06:37.229703] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:10.900 16:06:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.900 16:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:10.900 16:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:10.900 16:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:10.900 16:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:10.900 16:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:10.900 16:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:10.900 16:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:10.900 16:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:10.900 16:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:10.900 16:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:10.900 16:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.900 16:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:10.900 16:06:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:10.900 16:06:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.160 16:06:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.160 16:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:11.160 "name": "Existed_Raid", 00:10:11.160 "uuid": "8ddfcf62-14ba-4119-b5f5-3a713d1f940e", 00:10:11.160 "strip_size_kb": 0, 00:10:11.160 "state": "configuring", 00:10:11.160 "raid_level": "raid1", 00:10:11.160 "superblock": true, 00:10:11.160 "num_base_bdevs": 3, 00:10:11.160 "num_base_bdevs_discovered": 1, 00:10:11.160 "num_base_bdevs_operational": 3, 00:10:11.160 "base_bdevs_list": [ 00:10:11.160 { 00:10:11.160 "name": "BaseBdev1", 00:10:11.160 "uuid": "9cde99fd-ada0-4066-a4e9-fe91dbedbb1f", 00:10:11.160 "is_configured": true, 00:10:11.160 "data_offset": 2048, 00:10:11.160 "data_size": 63488 00:10:11.160 }, 00:10:11.160 { 00:10:11.160 "name": null, 00:10:11.160 "uuid": "e0e56360-5a89-4b98-9dab-26c952702827", 00:10:11.160 "is_configured": false, 00:10:11.160 "data_offset": 0, 00:10:11.160 "data_size": 63488 00:10:11.160 }, 00:10:11.160 { 00:10:11.160 "name": null, 00:10:11.160 "uuid": "969ff8f3-1371-4256-bf1a-3bea9608c0b4", 00:10:11.160 "is_configured": false, 00:10:11.160 "data_offset": 0, 00:10:11.160 "data_size": 63488 00:10:11.160 } 00:10:11.160 ] 00:10:11.160 }' 00:10:11.160 16:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:11.160 16:06:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.728 16:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.728 16:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:11.728 16:06:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:11.728 16:06:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.728 16:06:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.728 16:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:11.728 16:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:11.728 16:06:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.728 16:06:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.728 [2024-12-12 16:06:37.764858] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:11.728 16:06:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.728 16:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:11.728 16:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:11.728 16:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:11.728 16:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:11.728 16:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:11.728 16:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:11.728 16:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:11.728 16:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:11.728 16:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:10:11.728 16:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:11.728 16:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:11.728 16:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.728 16:06:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.728 16:06:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.728 16:06:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.728 16:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:11.728 "name": "Existed_Raid", 00:10:11.728 "uuid": "8ddfcf62-14ba-4119-b5f5-3a713d1f940e", 00:10:11.728 "strip_size_kb": 0, 00:10:11.728 "state": "configuring", 00:10:11.729 "raid_level": "raid1", 00:10:11.729 "superblock": true, 00:10:11.729 "num_base_bdevs": 3, 00:10:11.729 "num_base_bdevs_discovered": 2, 00:10:11.729 "num_base_bdevs_operational": 3, 00:10:11.729 "base_bdevs_list": [ 00:10:11.729 { 00:10:11.729 "name": "BaseBdev1", 00:10:11.729 "uuid": "9cde99fd-ada0-4066-a4e9-fe91dbedbb1f", 00:10:11.729 "is_configured": true, 00:10:11.729 "data_offset": 2048, 00:10:11.729 "data_size": 63488 00:10:11.729 }, 00:10:11.729 { 00:10:11.729 "name": null, 00:10:11.729 "uuid": "e0e56360-5a89-4b98-9dab-26c952702827", 00:10:11.729 "is_configured": false, 00:10:11.729 "data_offset": 0, 00:10:11.729 "data_size": 63488 00:10:11.729 }, 00:10:11.729 { 00:10:11.729 "name": "BaseBdev3", 00:10:11.729 "uuid": "969ff8f3-1371-4256-bf1a-3bea9608c0b4", 00:10:11.729 "is_configured": true, 00:10:11.729 "data_offset": 2048, 00:10:11.729 "data_size": 63488 00:10:11.729 } 00:10:11.729 ] 00:10:11.729 }' 00:10:11.729 16:06:37 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:11.729 16:06:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.991 16:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.991 16:06:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.991 16:06:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.991 16:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:11.991 16:06:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.991 16:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:11.991 16:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:11.992 16:06:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.992 16:06:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.992 [2024-12-12 16:06:38.240123] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:12.251 16:06:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.251 16:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:12.251 16:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:12.251 16:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:12.251 16:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:12.251 16:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 00:10:12.251 16:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:12.251 16:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:12.251 16:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:12.251 16:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:12.251 16:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:12.251 16:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.251 16:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:12.251 16:06:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.251 16:06:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.251 16:06:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.251 16:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:12.251 "name": "Existed_Raid", 00:10:12.251 "uuid": "8ddfcf62-14ba-4119-b5f5-3a713d1f940e", 00:10:12.251 "strip_size_kb": 0, 00:10:12.251 "state": "configuring", 00:10:12.251 "raid_level": "raid1", 00:10:12.251 "superblock": true, 00:10:12.251 "num_base_bdevs": 3, 00:10:12.251 "num_base_bdevs_discovered": 1, 00:10:12.251 "num_base_bdevs_operational": 3, 00:10:12.251 "base_bdevs_list": [ 00:10:12.251 { 00:10:12.251 "name": null, 00:10:12.251 "uuid": "9cde99fd-ada0-4066-a4e9-fe91dbedbb1f", 00:10:12.251 "is_configured": false, 00:10:12.251 "data_offset": 0, 00:10:12.251 "data_size": 63488 00:10:12.251 }, 00:10:12.251 { 00:10:12.251 "name": null, 00:10:12.251 "uuid": 
"e0e56360-5a89-4b98-9dab-26c952702827", 00:10:12.251 "is_configured": false, 00:10:12.251 "data_offset": 0, 00:10:12.251 "data_size": 63488 00:10:12.251 }, 00:10:12.251 { 00:10:12.251 "name": "BaseBdev3", 00:10:12.251 "uuid": "969ff8f3-1371-4256-bf1a-3bea9608c0b4", 00:10:12.251 "is_configured": true, 00:10:12.251 "data_offset": 2048, 00:10:12.251 "data_size": 63488 00:10:12.251 } 00:10:12.251 ] 00:10:12.251 }' 00:10:12.251 16:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:12.251 16:06:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.511 16:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.511 16:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:12.511 16:06:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.511 16:06:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.511 16:06:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.511 16:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:12.511 16:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:12.511 16:06:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.511 16:06:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.511 [2024-12-12 16:06:38.835886] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:12.511 16:06:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.511 16:06:38 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:12.511 16:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:12.511 16:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:12.511 16:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:12.511 16:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:12.511 16:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:12.511 16:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:12.511 16:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:12.511 16:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:12.511 16:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:12.511 16:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.511 16:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:12.511 16:06:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.511 16:06:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.771 16:06:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.771 16:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:12.771 "name": "Existed_Raid", 00:10:12.771 "uuid": "8ddfcf62-14ba-4119-b5f5-3a713d1f940e", 00:10:12.771 "strip_size_kb": 0, 00:10:12.771 "state": "configuring", 00:10:12.771 
"raid_level": "raid1", 00:10:12.771 "superblock": true, 00:10:12.771 "num_base_bdevs": 3, 00:10:12.771 "num_base_bdevs_discovered": 2, 00:10:12.771 "num_base_bdevs_operational": 3, 00:10:12.771 "base_bdevs_list": [ 00:10:12.771 { 00:10:12.771 "name": null, 00:10:12.771 "uuid": "9cde99fd-ada0-4066-a4e9-fe91dbedbb1f", 00:10:12.771 "is_configured": false, 00:10:12.771 "data_offset": 0, 00:10:12.771 "data_size": 63488 00:10:12.771 }, 00:10:12.771 { 00:10:12.771 "name": "BaseBdev2", 00:10:12.771 "uuid": "e0e56360-5a89-4b98-9dab-26c952702827", 00:10:12.771 "is_configured": true, 00:10:12.771 "data_offset": 2048, 00:10:12.771 "data_size": 63488 00:10:12.771 }, 00:10:12.771 { 00:10:12.771 "name": "BaseBdev3", 00:10:12.771 "uuid": "969ff8f3-1371-4256-bf1a-3bea9608c0b4", 00:10:12.771 "is_configured": true, 00:10:12.771 "data_offset": 2048, 00:10:12.771 "data_size": 63488 00:10:12.771 } 00:10:12.771 ] 00:10:12.771 }' 00:10:12.771 16:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:12.771 16:06:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.030 16:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:13.030 16:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.030 16:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.030 16:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.030 16:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.030 16:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:13.030 16:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:13.030 16:06:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.030 16:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.030 16:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.031 16:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.031 16:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 9cde99fd-ada0-4066-a4e9-fe91dbedbb1f 00:10:13.031 16:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.031 16:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.290 [2024-12-12 16:06:39.385753] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:13.290 [2024-12-12 16:06:39.386137] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:13.290 [2024-12-12 16:06:39.386186] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:13.290 [2024-12-12 16:06:39.386496] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:13.290 [2024-12-12 16:06:39.386688] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:13.290 NewBaseBdev 00:10:13.290 [2024-12-12 16:06:39.386735] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:13.290 [2024-12-12 16:06:39.386937] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:13.290 16:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.290 16:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:13.290 
16:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:13.290 16:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:13.290 16:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:13.290 16:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:13.290 16:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:13.290 16:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:13.290 16:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.290 16:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.290 16:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.290 16:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:13.290 16:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.290 16:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.290 [ 00:10:13.290 { 00:10:13.290 "name": "NewBaseBdev", 00:10:13.290 "aliases": [ 00:10:13.290 "9cde99fd-ada0-4066-a4e9-fe91dbedbb1f" 00:10:13.290 ], 00:10:13.290 "product_name": "Malloc disk", 00:10:13.290 "block_size": 512, 00:10:13.290 "num_blocks": 65536, 00:10:13.290 "uuid": "9cde99fd-ada0-4066-a4e9-fe91dbedbb1f", 00:10:13.290 "assigned_rate_limits": { 00:10:13.290 "rw_ios_per_sec": 0, 00:10:13.290 "rw_mbytes_per_sec": 0, 00:10:13.290 "r_mbytes_per_sec": 0, 00:10:13.290 "w_mbytes_per_sec": 0 00:10:13.290 }, 00:10:13.290 "claimed": true, 00:10:13.290 "claim_type": "exclusive_write", 00:10:13.290 
"zoned": false, 00:10:13.290 "supported_io_types": { 00:10:13.290 "read": true, 00:10:13.290 "write": true, 00:10:13.290 "unmap": true, 00:10:13.290 "flush": true, 00:10:13.290 "reset": true, 00:10:13.290 "nvme_admin": false, 00:10:13.290 "nvme_io": false, 00:10:13.290 "nvme_io_md": false, 00:10:13.290 "write_zeroes": true, 00:10:13.290 "zcopy": true, 00:10:13.290 "get_zone_info": false, 00:10:13.290 "zone_management": false, 00:10:13.290 "zone_append": false, 00:10:13.290 "compare": false, 00:10:13.290 "compare_and_write": false, 00:10:13.290 "abort": true, 00:10:13.290 "seek_hole": false, 00:10:13.290 "seek_data": false, 00:10:13.290 "copy": true, 00:10:13.290 "nvme_iov_md": false 00:10:13.290 }, 00:10:13.290 "memory_domains": [ 00:10:13.290 { 00:10:13.290 "dma_device_id": "system", 00:10:13.290 "dma_device_type": 1 00:10:13.290 }, 00:10:13.290 { 00:10:13.290 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.290 "dma_device_type": 2 00:10:13.290 } 00:10:13.290 ], 00:10:13.290 "driver_specific": {} 00:10:13.291 } 00:10:13.291 ] 00:10:13.291 16:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.291 16:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:13.291 16:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:13.291 16:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:13.291 16:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:13.291 16:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:13.291 16:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:13.291 16:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:10:13.291 16:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:13.291 16:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:13.291 16:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:13.291 16:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:13.291 16:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.291 16:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:13.291 16:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.291 16:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.291 16:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.291 16:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:13.291 "name": "Existed_Raid", 00:10:13.291 "uuid": "8ddfcf62-14ba-4119-b5f5-3a713d1f940e", 00:10:13.291 "strip_size_kb": 0, 00:10:13.291 "state": "online", 00:10:13.291 "raid_level": "raid1", 00:10:13.291 "superblock": true, 00:10:13.291 "num_base_bdevs": 3, 00:10:13.291 "num_base_bdevs_discovered": 3, 00:10:13.291 "num_base_bdevs_operational": 3, 00:10:13.291 "base_bdevs_list": [ 00:10:13.291 { 00:10:13.291 "name": "NewBaseBdev", 00:10:13.291 "uuid": "9cde99fd-ada0-4066-a4e9-fe91dbedbb1f", 00:10:13.291 "is_configured": true, 00:10:13.291 "data_offset": 2048, 00:10:13.291 "data_size": 63488 00:10:13.291 }, 00:10:13.291 { 00:10:13.291 "name": "BaseBdev2", 00:10:13.291 "uuid": "e0e56360-5a89-4b98-9dab-26c952702827", 00:10:13.291 "is_configured": true, 00:10:13.291 "data_offset": 2048, 00:10:13.291 "data_size": 63488 00:10:13.291 }, 00:10:13.291 
{ 00:10:13.291 "name": "BaseBdev3", 00:10:13.291 "uuid": "969ff8f3-1371-4256-bf1a-3bea9608c0b4", 00:10:13.291 "is_configured": true, 00:10:13.291 "data_offset": 2048, 00:10:13.291 "data_size": 63488 00:10:13.291 } 00:10:13.291 ] 00:10:13.291 }' 00:10:13.291 16:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:13.291 16:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.550 16:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:13.550 16:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:13.550 16:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:13.550 16:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:13.550 16:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:13.550 16:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:13.550 16:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:13.550 16:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:13.550 16:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.550 16:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.550 [2024-12-12 16:06:39.877281] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:13.550 16:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.810 16:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:13.810 "name": "Existed_Raid", 00:10:13.810 
"aliases": [ 00:10:13.810 "8ddfcf62-14ba-4119-b5f5-3a713d1f940e" 00:10:13.810 ], 00:10:13.810 "product_name": "Raid Volume", 00:10:13.810 "block_size": 512, 00:10:13.810 "num_blocks": 63488, 00:10:13.810 "uuid": "8ddfcf62-14ba-4119-b5f5-3a713d1f940e", 00:10:13.810 "assigned_rate_limits": { 00:10:13.810 "rw_ios_per_sec": 0, 00:10:13.810 "rw_mbytes_per_sec": 0, 00:10:13.810 "r_mbytes_per_sec": 0, 00:10:13.810 "w_mbytes_per_sec": 0 00:10:13.810 }, 00:10:13.810 "claimed": false, 00:10:13.810 "zoned": false, 00:10:13.810 "supported_io_types": { 00:10:13.810 "read": true, 00:10:13.810 "write": true, 00:10:13.810 "unmap": false, 00:10:13.810 "flush": false, 00:10:13.810 "reset": true, 00:10:13.810 "nvme_admin": false, 00:10:13.810 "nvme_io": false, 00:10:13.810 "nvme_io_md": false, 00:10:13.810 "write_zeroes": true, 00:10:13.810 "zcopy": false, 00:10:13.810 "get_zone_info": false, 00:10:13.810 "zone_management": false, 00:10:13.810 "zone_append": false, 00:10:13.810 "compare": false, 00:10:13.810 "compare_and_write": false, 00:10:13.810 "abort": false, 00:10:13.810 "seek_hole": false, 00:10:13.810 "seek_data": false, 00:10:13.810 "copy": false, 00:10:13.810 "nvme_iov_md": false 00:10:13.810 }, 00:10:13.810 "memory_domains": [ 00:10:13.810 { 00:10:13.810 "dma_device_id": "system", 00:10:13.810 "dma_device_type": 1 00:10:13.810 }, 00:10:13.810 { 00:10:13.810 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.810 "dma_device_type": 2 00:10:13.810 }, 00:10:13.810 { 00:10:13.810 "dma_device_id": "system", 00:10:13.810 "dma_device_type": 1 00:10:13.810 }, 00:10:13.810 { 00:10:13.810 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.810 "dma_device_type": 2 00:10:13.810 }, 00:10:13.810 { 00:10:13.810 "dma_device_id": "system", 00:10:13.810 "dma_device_type": 1 00:10:13.810 }, 00:10:13.810 { 00:10:13.810 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.810 "dma_device_type": 2 00:10:13.810 } 00:10:13.810 ], 00:10:13.810 "driver_specific": { 00:10:13.810 "raid": { 00:10:13.810 
"uuid": "8ddfcf62-14ba-4119-b5f5-3a713d1f940e", 00:10:13.810 "strip_size_kb": 0, 00:10:13.810 "state": "online", 00:10:13.810 "raid_level": "raid1", 00:10:13.810 "superblock": true, 00:10:13.810 "num_base_bdevs": 3, 00:10:13.810 "num_base_bdevs_discovered": 3, 00:10:13.810 "num_base_bdevs_operational": 3, 00:10:13.810 "base_bdevs_list": [ 00:10:13.810 { 00:10:13.810 "name": "NewBaseBdev", 00:10:13.810 "uuid": "9cde99fd-ada0-4066-a4e9-fe91dbedbb1f", 00:10:13.810 "is_configured": true, 00:10:13.810 "data_offset": 2048, 00:10:13.810 "data_size": 63488 00:10:13.810 }, 00:10:13.810 { 00:10:13.810 "name": "BaseBdev2", 00:10:13.810 "uuid": "e0e56360-5a89-4b98-9dab-26c952702827", 00:10:13.810 "is_configured": true, 00:10:13.810 "data_offset": 2048, 00:10:13.810 "data_size": 63488 00:10:13.810 }, 00:10:13.810 { 00:10:13.810 "name": "BaseBdev3", 00:10:13.810 "uuid": "969ff8f3-1371-4256-bf1a-3bea9608c0b4", 00:10:13.810 "is_configured": true, 00:10:13.810 "data_offset": 2048, 00:10:13.810 "data_size": 63488 00:10:13.810 } 00:10:13.810 ] 00:10:13.810 } 00:10:13.810 } 00:10:13.810 }' 00:10:13.810 16:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:13.810 16:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:13.810 BaseBdev2 00:10:13.810 BaseBdev3' 00:10:13.810 16:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:13.810 16:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:13.810 16:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:13.810 16:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:13.810 
16:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:13.810 16:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.810 16:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.810 16:06:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.810 16:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:13.810 16:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:13.810 16:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:13.810 16:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:13.810 16:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:13.810 16:06:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.810 16:06:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.810 16:06:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.810 16:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:13.810 16:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:13.810 16:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:13.810 16:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:13.810 16:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, 
.md_size, .md_interleave, .dif_type] | join(" ")' 00:10:13.810 16:06:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.811 16:06:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.811 16:06:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.811 16:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:13.811 16:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:13.811 16:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:13.811 16:06:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.811 16:06:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.811 [2024-12-12 16:06:40.152513] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:13.811 [2024-12-12 16:06:40.152648] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:13.811 [2024-12-12 16:06:40.152748] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:13.811 [2024-12-12 16:06:40.153100] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:13.811 [2024-12-12 16:06:40.153114] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:13.811 16:06:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.811 16:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 70045 00:10:13.811 16:06:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 70045 ']' 00:10:13.811 16:06:40 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 70045 00:10:14.070 16:06:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:10:14.070 16:06:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:14.070 16:06:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70045 00:10:14.070 16:06:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:14.070 16:06:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:14.070 16:06:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70045' 00:10:14.070 killing process with pid 70045 00:10:14.070 16:06:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 70045 00:10:14.070 [2024-12-12 16:06:40.201125] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:14.070 16:06:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 70045 00:10:14.329 [2024-12-12 16:06:40.528572] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:15.433 16:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:15.433 00:10:15.433 real 0m10.883s 00:10:15.433 user 0m17.041s 00:10:15.433 sys 0m1.932s 00:10:15.433 16:06:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:15.433 16:06:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.433 ************************************ 00:10:15.433 END TEST raid_state_function_test_sb 00:10:15.433 ************************************ 00:10:15.693 16:06:41 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:10:15.693 16:06:41 
bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:15.693 16:06:41 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:15.693 16:06:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:15.693 ************************************ 00:10:15.693 START TEST raid_superblock_test 00:10:15.693 ************************************ 00:10:15.693 16:06:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 3 00:10:15.693 16:06:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:10:15.693 16:06:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:10:15.693 16:06:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:15.693 16:06:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:15.693 16:06:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:15.693 16:06:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:15.693 16:06:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:15.693 16:06:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:15.693 16:06:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:15.693 16:06:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:15.693 16:06:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:15.693 16:06:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:15.693 16:06:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:15.693 16:06:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:10:15.693 16:06:41 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:10:15.693 16:06:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=70665 00:10:15.693 16:06:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:15.693 16:06:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 70665 00:10:15.693 16:06:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 70665 ']' 00:10:15.693 16:06:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:15.693 16:06:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:15.693 16:06:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:15.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:15.693 16:06:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:15.693 16:06:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.693 [2024-12-12 16:06:41.936229] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:10:15.693 [2024-12-12 16:06:41.936356] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70665 ] 00:10:15.952 [2024-12-12 16:06:42.113967] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:15.952 [2024-12-12 16:06:42.249761] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:16.211 [2024-12-12 16:06:42.492525] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:16.211 [2024-12-12 16:06:42.492605] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:16.471 16:06:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:16.471 16:06:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:10:16.471 16:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:16.471 16:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:16.471 16:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:16.471 16:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:16.471 16:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:16.471 16:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:16.471 16:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:16.471 16:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:16.471 16:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:16.471 
16:06:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.471 16:06:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.471 malloc1 00:10:16.471 16:06:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.471 16:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:16.471 16:06:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.471 16:06:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.731 [2024-12-12 16:06:42.822616] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:16.731 [2024-12-12 16:06:42.822776] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:16.731 [2024-12-12 16:06:42.822819] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:16.731 [2024-12-12 16:06:42.822849] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:16.731 [2024-12-12 16:06:42.825351] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:16.731 [2024-12-12 16:06:42.825439] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:16.731 pt1 00:10:16.731 16:06:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.731 16:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:16.731 16:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:16.731 16:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:16.731 16:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:16.731 16:06:42 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:16.731 16:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:16.731 16:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:16.731 16:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:16.731 16:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:16.731 16:06:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.731 16:06:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.731 malloc2 00:10:16.731 16:06:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.731 16:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:16.731 16:06:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.731 16:06:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.731 [2024-12-12 16:06:42.889957] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:16.731 [2024-12-12 16:06:42.890014] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:16.731 [2024-12-12 16:06:42.890039] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:16.731 [2024-12-12 16:06:42.890048] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:16.731 [2024-12-12 16:06:42.892405] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:16.731 [2024-12-12 16:06:42.892442] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:16.731 
pt2 00:10:16.731 16:06:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.731 16:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:16.731 16:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:16.731 16:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:16.731 16:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:16.731 16:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:16.731 16:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:16.731 16:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:16.731 16:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:16.731 16:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:16.731 16:06:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.731 16:06:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.731 malloc3 00:10:16.731 16:06:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.731 16:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:16.731 16:06:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.731 16:06:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.731 [2024-12-12 16:06:42.965704] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:16.731 [2024-12-12 16:06:42.965840] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:16.731 [2024-12-12 16:06:42.965879] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:16.731 [2024-12-12 16:06:42.965923] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:16.731 [2024-12-12 16:06:42.968261] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:16.731 [2024-12-12 16:06:42.968353] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:16.731 pt3 00:10:16.731 16:06:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.731 16:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:16.731 16:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:16.731 16:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:10:16.731 16:06:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.731 16:06:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.731 [2024-12-12 16:06:42.977731] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:16.731 [2024-12-12 16:06:42.979822] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:16.731 [2024-12-12 16:06:42.979944] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:16.731 [2024-12-12 16:06:42.980133] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:16.731 [2024-12-12 16:06:42.980192] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:16.731 [2024-12-12 16:06:42.980453] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:16.731 
[2024-12-12 16:06:42.980670] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:16.731 [2024-12-12 16:06:42.980716] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:16.731 [2024-12-12 16:06:42.980901] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:16.731 16:06:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.731 16:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:16.731 16:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:16.731 16:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:16.731 16:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:16.731 16:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:16.731 16:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:16.731 16:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:16.731 16:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:16.731 16:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:16.731 16:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:16.731 16:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.731 16:06:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.731 16:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:16.731 16:06:42 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:16.731 16:06:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.731 16:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:16.731 "name": "raid_bdev1", 00:10:16.731 "uuid": "66448ceb-eb6b-4686-b106-a038b90bf1eb", 00:10:16.731 "strip_size_kb": 0, 00:10:16.731 "state": "online", 00:10:16.731 "raid_level": "raid1", 00:10:16.732 "superblock": true, 00:10:16.732 "num_base_bdevs": 3, 00:10:16.732 "num_base_bdevs_discovered": 3, 00:10:16.732 "num_base_bdevs_operational": 3, 00:10:16.732 "base_bdevs_list": [ 00:10:16.732 { 00:10:16.732 "name": "pt1", 00:10:16.732 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:16.732 "is_configured": true, 00:10:16.732 "data_offset": 2048, 00:10:16.732 "data_size": 63488 00:10:16.732 }, 00:10:16.732 { 00:10:16.732 "name": "pt2", 00:10:16.732 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:16.732 "is_configured": true, 00:10:16.732 "data_offset": 2048, 00:10:16.732 "data_size": 63488 00:10:16.732 }, 00:10:16.732 { 00:10:16.732 "name": "pt3", 00:10:16.732 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:16.732 "is_configured": true, 00:10:16.732 "data_offset": 2048, 00:10:16.732 "data_size": 63488 00:10:16.732 } 00:10:16.732 ] 00:10:16.732 }' 00:10:16.732 16:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:16.732 16:06:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.300 16:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:17.300 16:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:17.300 16:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:17.300 16:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:17.300 16:06:43 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:17.300 16:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:17.300 16:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:17.300 16:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:17.300 16:06:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.300 16:06:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.300 [2024-12-12 16:06:43.421363] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:17.300 16:06:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.300 16:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:17.300 "name": "raid_bdev1", 00:10:17.300 "aliases": [ 00:10:17.300 "66448ceb-eb6b-4686-b106-a038b90bf1eb" 00:10:17.300 ], 00:10:17.300 "product_name": "Raid Volume", 00:10:17.300 "block_size": 512, 00:10:17.300 "num_blocks": 63488, 00:10:17.300 "uuid": "66448ceb-eb6b-4686-b106-a038b90bf1eb", 00:10:17.300 "assigned_rate_limits": { 00:10:17.300 "rw_ios_per_sec": 0, 00:10:17.300 "rw_mbytes_per_sec": 0, 00:10:17.300 "r_mbytes_per_sec": 0, 00:10:17.300 "w_mbytes_per_sec": 0 00:10:17.300 }, 00:10:17.300 "claimed": false, 00:10:17.300 "zoned": false, 00:10:17.300 "supported_io_types": { 00:10:17.300 "read": true, 00:10:17.300 "write": true, 00:10:17.300 "unmap": false, 00:10:17.300 "flush": false, 00:10:17.300 "reset": true, 00:10:17.300 "nvme_admin": false, 00:10:17.300 "nvme_io": false, 00:10:17.300 "nvme_io_md": false, 00:10:17.300 "write_zeroes": true, 00:10:17.300 "zcopy": false, 00:10:17.300 "get_zone_info": false, 00:10:17.300 "zone_management": false, 00:10:17.300 "zone_append": false, 00:10:17.300 "compare": false, 00:10:17.300 
"compare_and_write": false, 00:10:17.300 "abort": false, 00:10:17.300 "seek_hole": false, 00:10:17.300 "seek_data": false, 00:10:17.300 "copy": false, 00:10:17.300 "nvme_iov_md": false 00:10:17.300 }, 00:10:17.300 "memory_domains": [ 00:10:17.300 { 00:10:17.300 "dma_device_id": "system", 00:10:17.300 "dma_device_type": 1 00:10:17.300 }, 00:10:17.300 { 00:10:17.300 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.300 "dma_device_type": 2 00:10:17.300 }, 00:10:17.300 { 00:10:17.300 "dma_device_id": "system", 00:10:17.300 "dma_device_type": 1 00:10:17.300 }, 00:10:17.300 { 00:10:17.300 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.300 "dma_device_type": 2 00:10:17.300 }, 00:10:17.300 { 00:10:17.300 "dma_device_id": "system", 00:10:17.300 "dma_device_type": 1 00:10:17.300 }, 00:10:17.300 { 00:10:17.300 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.300 "dma_device_type": 2 00:10:17.300 } 00:10:17.300 ], 00:10:17.300 "driver_specific": { 00:10:17.300 "raid": { 00:10:17.300 "uuid": "66448ceb-eb6b-4686-b106-a038b90bf1eb", 00:10:17.300 "strip_size_kb": 0, 00:10:17.300 "state": "online", 00:10:17.300 "raid_level": "raid1", 00:10:17.300 "superblock": true, 00:10:17.300 "num_base_bdevs": 3, 00:10:17.300 "num_base_bdevs_discovered": 3, 00:10:17.300 "num_base_bdevs_operational": 3, 00:10:17.300 "base_bdevs_list": [ 00:10:17.300 { 00:10:17.300 "name": "pt1", 00:10:17.300 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:17.300 "is_configured": true, 00:10:17.300 "data_offset": 2048, 00:10:17.300 "data_size": 63488 00:10:17.300 }, 00:10:17.300 { 00:10:17.300 "name": "pt2", 00:10:17.300 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:17.300 "is_configured": true, 00:10:17.300 "data_offset": 2048, 00:10:17.300 "data_size": 63488 00:10:17.300 }, 00:10:17.300 { 00:10:17.300 "name": "pt3", 00:10:17.300 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:17.300 "is_configured": true, 00:10:17.300 "data_offset": 2048, 00:10:17.300 "data_size": 63488 00:10:17.300 } 
00:10:17.300 ] 00:10:17.300 } 00:10:17.300 } 00:10:17.300 }' 00:10:17.300 16:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:17.300 16:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:17.300 pt2 00:10:17.300 pt3' 00:10:17.300 16:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:17.300 16:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:17.300 16:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:17.300 16:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:17.300 16:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:17.300 16:06:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.300 16:06:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.300 16:06:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.300 16:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:17.300 16:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:17.300 16:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:17.300 16:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:17.300 16:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:17.300 16:06:43 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.300 16:06:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.300 16:06:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.300 16:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:17.301 16:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:17.301 16:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:17.301 16:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:17.301 16:06:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.301 16:06:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.301 16:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:17.301 16:06:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.301 16:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:17.301 16:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:17.301 16:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:17.301 16:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:17.301 16:06:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.301 16:06:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.301 [2024-12-12 16:06:43.608866] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:17.301 16:06:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:10:17.560 16:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=66448ceb-eb6b-4686-b106-a038b90bf1eb 00:10:17.560 16:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 66448ceb-eb6b-4686-b106-a038b90bf1eb ']' 00:10:17.560 16:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:17.560 16:06:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.560 16:06:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.560 [2024-12-12 16:06:43.656508] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:17.560 [2024-12-12 16:06:43.656537] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:17.560 [2024-12-12 16:06:43.656622] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:17.560 [2024-12-12 16:06:43.656708] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:17.560 [2024-12-12 16:06:43.656718] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:17.560 16:06:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.560 16:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.560 16:06:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.560 16:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:17.560 16:06:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.560 16:06:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.560 16:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:10:17.560 16:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:17.560 16:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:17.560 16:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:17.560 16:06:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.560 16:06:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.560 16:06:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.560 16:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:17.560 16:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:17.560 16:06:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.560 16:06:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.560 16:06:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.560 16:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:17.560 16:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:17.560 16:06:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.560 16:06:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.560 16:06:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.560 16:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:17.560 16:06:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.560 16:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 
-- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:17.560 16:06:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.560 16:06:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.560 16:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:17.560 16:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:17.560 16:06:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:10:17.560 16:06:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:17.560 16:06:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:17.560 16:06:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:17.560 16:06:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:17.561 16:06:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:17.561 16:06:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:17.561 16:06:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.561 16:06:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.561 [2024-12-12 16:06:43.788404] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:17.561 [2024-12-12 16:06:43.790578] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:17.561 [2024-12-12 16:06:43.790738] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:17.561 [2024-12-12 16:06:43.790804] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:17.561 [2024-12-12 16:06:43.790865] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:17.561 [2024-12-12 16:06:43.790885] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:17.561 [2024-12-12 16:06:43.790911] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:17.561 [2024-12-12 16:06:43.790922] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:10:17.561 request: 00:10:17.561 { 00:10:17.561 "name": "raid_bdev1", 00:10:17.561 "raid_level": "raid1", 00:10:17.561 "base_bdevs": [ 00:10:17.561 "malloc1", 00:10:17.561 "malloc2", 00:10:17.561 "malloc3" 00:10:17.561 ], 00:10:17.561 "superblock": false, 00:10:17.561 "method": "bdev_raid_create", 00:10:17.561 "req_id": 1 00:10:17.561 } 00:10:17.561 Got JSON-RPC error response 00:10:17.561 response: 00:10:17.561 { 00:10:17.561 "code": -17, 00:10:17.561 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:17.561 } 00:10:17.561 16:06:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:17.561 16:06:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:10:17.561 16:06:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:17.561 16:06:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:17.561 16:06:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:17.561 16:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:10:17.561 16:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:17.561 16:06:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.561 16:06:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.561 16:06:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.561 16:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:17.561 16:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:17.561 16:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:17.561 16:06:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.561 16:06:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.561 [2024-12-12 16:06:43.848173] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:17.561 [2024-12-12 16:06:43.848264] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:17.561 [2024-12-12 16:06:43.848302] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:17.561 [2024-12-12 16:06:43.848346] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:17.561 [2024-12-12 16:06:43.850795] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:17.561 [2024-12-12 16:06:43.850874] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:17.561 [2024-12-12 16:06:43.850995] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:17.561 [2024-12-12 16:06:43.851077] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:17.561 pt1 00:10:17.561 
16:06:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.561 16:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:10:17.561 16:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:17.561 16:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:17.561 16:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:17.561 16:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:17.561 16:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:17.561 16:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:17.561 16:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:17.561 16:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:17.561 16:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:17.561 16:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.561 16:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:17.561 16:06:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.561 16:06:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.561 16:06:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.821 16:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.821 "name": "raid_bdev1", 00:10:17.821 "uuid": "66448ceb-eb6b-4686-b106-a038b90bf1eb", 00:10:17.821 "strip_size_kb": 0, 00:10:17.821 
"state": "configuring", 00:10:17.821 "raid_level": "raid1", 00:10:17.821 "superblock": true, 00:10:17.821 "num_base_bdevs": 3, 00:10:17.821 "num_base_bdevs_discovered": 1, 00:10:17.821 "num_base_bdevs_operational": 3, 00:10:17.821 "base_bdevs_list": [ 00:10:17.821 { 00:10:17.821 "name": "pt1", 00:10:17.821 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:17.821 "is_configured": true, 00:10:17.821 "data_offset": 2048, 00:10:17.821 "data_size": 63488 00:10:17.821 }, 00:10:17.821 { 00:10:17.821 "name": null, 00:10:17.821 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:17.821 "is_configured": false, 00:10:17.821 "data_offset": 2048, 00:10:17.821 "data_size": 63488 00:10:17.821 }, 00:10:17.821 { 00:10:17.821 "name": null, 00:10:17.821 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:17.821 "is_configured": false, 00:10:17.821 "data_offset": 2048, 00:10:17.821 "data_size": 63488 00:10:17.821 } 00:10:17.821 ] 00:10:17.821 }' 00:10:17.821 16:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.821 16:06:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.080 16:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:10:18.080 16:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:18.080 16:06:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.080 16:06:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.080 [2024-12-12 16:06:44.315545] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:18.080 [2024-12-12 16:06:44.315647] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:18.080 [2024-12-12 16:06:44.315674] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:10:18.080 
[2024-12-12 16:06:44.315683] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:18.080 [2024-12-12 16:06:44.316227] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:18.080 [2024-12-12 16:06:44.316248] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:18.080 [2024-12-12 16:06:44.316349] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:18.080 [2024-12-12 16:06:44.316376] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:18.080 pt2 00:10:18.080 16:06:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.080 16:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:18.080 16:06:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.080 16:06:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.080 [2024-12-12 16:06:44.323482] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:18.080 16:06:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.080 16:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:10:18.080 16:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:18.080 16:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:18.080 16:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:18.080 16:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:18.080 16:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:18.080 16:06:44 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:18.080 16:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:18.080 16:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:18.080 16:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:18.080 16:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:18.080 16:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.080 16:06:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.080 16:06:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.080 16:06:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.080 16:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:18.080 "name": "raid_bdev1", 00:10:18.080 "uuid": "66448ceb-eb6b-4686-b106-a038b90bf1eb", 00:10:18.080 "strip_size_kb": 0, 00:10:18.080 "state": "configuring", 00:10:18.080 "raid_level": "raid1", 00:10:18.080 "superblock": true, 00:10:18.080 "num_base_bdevs": 3, 00:10:18.080 "num_base_bdevs_discovered": 1, 00:10:18.080 "num_base_bdevs_operational": 3, 00:10:18.080 "base_bdevs_list": [ 00:10:18.080 { 00:10:18.080 "name": "pt1", 00:10:18.080 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:18.080 "is_configured": true, 00:10:18.080 "data_offset": 2048, 00:10:18.080 "data_size": 63488 00:10:18.080 }, 00:10:18.080 { 00:10:18.080 "name": null, 00:10:18.080 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:18.080 "is_configured": false, 00:10:18.080 "data_offset": 0, 00:10:18.080 "data_size": 63488 00:10:18.080 }, 00:10:18.080 { 00:10:18.081 "name": null, 00:10:18.081 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:18.081 "is_configured": false, 00:10:18.081 
"data_offset": 2048, 00:10:18.081 "data_size": 63488 00:10:18.081 } 00:10:18.081 ] 00:10:18.081 }' 00:10:18.081 16:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:18.081 16:06:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.650 16:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:18.650 16:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:18.650 16:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:18.650 16:06:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.650 16:06:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.650 [2024-12-12 16:06:44.730803] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:18.650 [2024-12-12 16:06:44.731024] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:18.650 [2024-12-12 16:06:44.731069] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:10:18.650 [2024-12-12 16:06:44.731109] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:18.650 [2024-12-12 16:06:44.731703] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:18.650 [2024-12-12 16:06:44.731770] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:18.650 [2024-12-12 16:06:44.731962] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:18.650 [2024-12-12 16:06:44.732037] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:18.650 pt2 00:10:18.650 16:06:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.651 16:06:44 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:18.651 16:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:18.651 16:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:18.651 16:06:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.651 16:06:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.651 [2024-12-12 16:06:44.742753] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:18.651 [2024-12-12 16:06:44.742858] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:18.651 [2024-12-12 16:06:44.742900] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:18.651 [2024-12-12 16:06:44.742932] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:18.651 [2024-12-12 16:06:44.743406] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:18.651 [2024-12-12 16:06:44.743470] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:18.651 [2024-12-12 16:06:44.743561] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:18.651 [2024-12-12 16:06:44.743588] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:18.651 [2024-12-12 16:06:44.743746] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:18.651 [2024-12-12 16:06:44.743761] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:18.651 [2024-12-12 16:06:44.744054] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:18.651 [2024-12-12 16:06:44.744228] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x617000007e80 00:10:18.651 [2024-12-12 16:06:44.744244] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:18.651 [2024-12-12 16:06:44.744409] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:18.651 pt3 00:10:18.651 16:06:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.651 16:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:18.651 16:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:18.651 16:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:18.651 16:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:18.651 16:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:18.651 16:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:18.651 16:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:18.651 16:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:18.651 16:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:18.651 16:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:18.651 16:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:18.651 16:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:18.651 16:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:18.651 16:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.651 16:06:44 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.651 16:06:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.651 16:06:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.651 16:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:18.651 "name": "raid_bdev1", 00:10:18.651 "uuid": "66448ceb-eb6b-4686-b106-a038b90bf1eb", 00:10:18.651 "strip_size_kb": 0, 00:10:18.651 "state": "online", 00:10:18.651 "raid_level": "raid1", 00:10:18.651 "superblock": true, 00:10:18.651 "num_base_bdevs": 3, 00:10:18.651 "num_base_bdevs_discovered": 3, 00:10:18.651 "num_base_bdevs_operational": 3, 00:10:18.651 "base_bdevs_list": [ 00:10:18.651 { 00:10:18.651 "name": "pt1", 00:10:18.651 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:18.651 "is_configured": true, 00:10:18.651 "data_offset": 2048, 00:10:18.651 "data_size": 63488 00:10:18.651 }, 00:10:18.651 { 00:10:18.651 "name": "pt2", 00:10:18.651 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:18.651 "is_configured": true, 00:10:18.651 "data_offset": 2048, 00:10:18.651 "data_size": 63488 00:10:18.651 }, 00:10:18.651 { 00:10:18.651 "name": "pt3", 00:10:18.651 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:18.651 "is_configured": true, 00:10:18.651 "data_offset": 2048, 00:10:18.651 "data_size": 63488 00:10:18.651 } 00:10:18.651 ] 00:10:18.651 }' 00:10:18.651 16:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:18.651 16:06:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.911 16:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:18.911 16:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:18.911 16:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local 
raid_bdev_info 00:10:18.911 16:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:18.911 16:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:18.911 16:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:18.911 16:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:18.911 16:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:18.911 16:06:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.911 16:06:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.911 [2024-12-12 16:06:45.154426] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:18.911 16:06:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.911 16:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:18.911 "name": "raid_bdev1", 00:10:18.911 "aliases": [ 00:10:18.911 "66448ceb-eb6b-4686-b106-a038b90bf1eb" 00:10:18.911 ], 00:10:18.911 "product_name": "Raid Volume", 00:10:18.911 "block_size": 512, 00:10:18.911 "num_blocks": 63488, 00:10:18.911 "uuid": "66448ceb-eb6b-4686-b106-a038b90bf1eb", 00:10:18.911 "assigned_rate_limits": { 00:10:18.911 "rw_ios_per_sec": 0, 00:10:18.911 "rw_mbytes_per_sec": 0, 00:10:18.911 "r_mbytes_per_sec": 0, 00:10:18.911 "w_mbytes_per_sec": 0 00:10:18.911 }, 00:10:18.911 "claimed": false, 00:10:18.911 "zoned": false, 00:10:18.911 "supported_io_types": { 00:10:18.911 "read": true, 00:10:18.911 "write": true, 00:10:18.911 "unmap": false, 00:10:18.911 "flush": false, 00:10:18.911 "reset": true, 00:10:18.911 "nvme_admin": false, 00:10:18.911 "nvme_io": false, 00:10:18.911 "nvme_io_md": false, 00:10:18.911 "write_zeroes": true, 00:10:18.911 "zcopy": false, 00:10:18.911 "get_zone_info": 
false, 00:10:18.911 "zone_management": false, 00:10:18.911 "zone_append": false, 00:10:18.911 "compare": false, 00:10:18.911 "compare_and_write": false, 00:10:18.911 "abort": false, 00:10:18.911 "seek_hole": false, 00:10:18.911 "seek_data": false, 00:10:18.911 "copy": false, 00:10:18.911 "nvme_iov_md": false 00:10:18.911 }, 00:10:18.911 "memory_domains": [ 00:10:18.911 { 00:10:18.911 "dma_device_id": "system", 00:10:18.911 "dma_device_type": 1 00:10:18.911 }, 00:10:18.911 { 00:10:18.911 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:18.911 "dma_device_type": 2 00:10:18.911 }, 00:10:18.911 { 00:10:18.911 "dma_device_id": "system", 00:10:18.911 "dma_device_type": 1 00:10:18.911 }, 00:10:18.911 { 00:10:18.911 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:18.911 "dma_device_type": 2 00:10:18.911 }, 00:10:18.911 { 00:10:18.911 "dma_device_id": "system", 00:10:18.911 "dma_device_type": 1 00:10:18.911 }, 00:10:18.911 { 00:10:18.911 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:18.911 "dma_device_type": 2 00:10:18.911 } 00:10:18.911 ], 00:10:18.911 "driver_specific": { 00:10:18.911 "raid": { 00:10:18.911 "uuid": "66448ceb-eb6b-4686-b106-a038b90bf1eb", 00:10:18.911 "strip_size_kb": 0, 00:10:18.911 "state": "online", 00:10:18.911 "raid_level": "raid1", 00:10:18.911 "superblock": true, 00:10:18.911 "num_base_bdevs": 3, 00:10:18.911 "num_base_bdevs_discovered": 3, 00:10:18.911 "num_base_bdevs_operational": 3, 00:10:18.911 "base_bdevs_list": [ 00:10:18.911 { 00:10:18.911 "name": "pt1", 00:10:18.911 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:18.911 "is_configured": true, 00:10:18.911 "data_offset": 2048, 00:10:18.911 "data_size": 63488 00:10:18.911 }, 00:10:18.911 { 00:10:18.911 "name": "pt2", 00:10:18.911 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:18.911 "is_configured": true, 00:10:18.911 "data_offset": 2048, 00:10:18.911 "data_size": 63488 00:10:18.911 }, 00:10:18.911 { 00:10:18.911 "name": "pt3", 00:10:18.911 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:10:18.911 "is_configured": true, 00:10:18.911 "data_offset": 2048, 00:10:18.911 "data_size": 63488 00:10:18.911 } 00:10:18.911 ] 00:10:18.911 } 00:10:18.911 } 00:10:18.911 }' 00:10:18.911 16:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:18.911 16:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:18.911 pt2 00:10:18.911 pt3' 00:10:18.911 16:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:19.171 16:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:19.171 16:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:19.171 16:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:19.171 16:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:19.171 16:06:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.171 16:06:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.171 16:06:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.171 16:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:19.171 16:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:19.171 16:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:19.171 16:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:19.171 16:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq 
-r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:19.171 16:06:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.171 16:06:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.171 16:06:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.171 16:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:19.171 16:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:19.171 16:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:19.171 16:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:19.171 16:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:19.171 16:06:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.171 16:06:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.171 16:06:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.171 16:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:19.171 16:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:19.171 16:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:19.171 16:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:19.171 16:06:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.171 16:06:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.171 [2024-12-12 16:06:45.401959] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:19.171 16:06:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.171 16:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 66448ceb-eb6b-4686-b106-a038b90bf1eb '!=' 66448ceb-eb6b-4686-b106-a038b90bf1eb ']' 00:10:19.171 16:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:10:19.171 16:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:19.171 16:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:19.171 16:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:10:19.171 16:06:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.171 16:06:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.171 [2024-12-12 16:06:45.449685] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:10:19.171 16:06:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.171 16:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:19.171 16:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:19.171 16:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:19.171 16:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:19.171 16:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:19.171 16:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:19.171 16:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:19.171 16:06:45 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:19.171 16:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:19.171 16:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:19.171 16:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.171 16:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:19.171 16:06:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.171 16:06:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.171 16:06:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.171 16:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:19.171 "name": "raid_bdev1", 00:10:19.171 "uuid": "66448ceb-eb6b-4686-b106-a038b90bf1eb", 00:10:19.171 "strip_size_kb": 0, 00:10:19.171 "state": "online", 00:10:19.171 "raid_level": "raid1", 00:10:19.171 "superblock": true, 00:10:19.171 "num_base_bdevs": 3, 00:10:19.171 "num_base_bdevs_discovered": 2, 00:10:19.171 "num_base_bdevs_operational": 2, 00:10:19.171 "base_bdevs_list": [ 00:10:19.171 { 00:10:19.171 "name": null, 00:10:19.171 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:19.171 "is_configured": false, 00:10:19.171 "data_offset": 0, 00:10:19.171 "data_size": 63488 00:10:19.171 }, 00:10:19.171 { 00:10:19.171 "name": "pt2", 00:10:19.171 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:19.171 "is_configured": true, 00:10:19.171 "data_offset": 2048, 00:10:19.171 "data_size": 63488 00:10:19.171 }, 00:10:19.171 { 00:10:19.171 "name": "pt3", 00:10:19.171 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:19.171 "is_configured": true, 00:10:19.171 "data_offset": 2048, 00:10:19.171 "data_size": 63488 00:10:19.171 } 
00:10:19.171 ] 00:10:19.171 }' 00:10:19.171 16:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:19.171 16:06:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.739 16:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:19.739 16:06:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.739 16:06:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.739 [2024-12-12 16:06:45.884977] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:19.739 [2024-12-12 16:06:45.885112] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:19.739 [2024-12-12 16:06:45.885233] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:19.740 [2024-12-12 16:06:45.885328] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:19.740 [2024-12-12 16:06:45.885379] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:19.740 16:06:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.740 16:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.740 16:06:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.740 16:06:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.740 16:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:10:19.740 16:06:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.740 16:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:10:19.740 16:06:45 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:10:19.740 16:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:10:19.740 16:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:19.740 16:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:10:19.740 16:06:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.740 16:06:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.740 16:06:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.740 16:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:10:19.740 16:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:19.740 16:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:10:19.740 16:06:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.740 16:06:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.740 16:06:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.740 16:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:10:19.740 16:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:19.740 16:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:10:19.740 16:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:10:19.740 16:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:19.740 16:06:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.740 16:06:45 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.740 [2024-12-12 16:06:45.968738] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:19.740 [2024-12-12 16:06:45.968798] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:19.740 [2024-12-12 16:06:45.968816] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:10:19.740 [2024-12-12 16:06:45.968827] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:19.740 [2024-12-12 16:06:45.971381] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:19.740 [2024-12-12 16:06:45.971466] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:19.740 [2024-12-12 16:06:45.971558] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:19.740 [2024-12-12 16:06:45.971620] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:19.740 pt2 00:10:19.740 16:06:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.740 16:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:10:19.740 16:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:19.740 16:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:19.740 16:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:19.740 16:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:19.740 16:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:19.740 16:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:19.740 16:06:45 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:19.740 16:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:19.740 16:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:19.740 16:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.740 16:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:19.740 16:06:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.740 16:06:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.740 16:06:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.740 16:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:19.740 "name": "raid_bdev1", 00:10:19.740 "uuid": "66448ceb-eb6b-4686-b106-a038b90bf1eb", 00:10:19.740 "strip_size_kb": 0, 00:10:19.740 "state": "configuring", 00:10:19.740 "raid_level": "raid1", 00:10:19.740 "superblock": true, 00:10:19.740 "num_base_bdevs": 3, 00:10:19.740 "num_base_bdevs_discovered": 1, 00:10:19.740 "num_base_bdevs_operational": 2, 00:10:19.740 "base_bdevs_list": [ 00:10:19.740 { 00:10:19.740 "name": null, 00:10:19.740 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:19.740 "is_configured": false, 00:10:19.740 "data_offset": 2048, 00:10:19.740 "data_size": 63488 00:10:19.740 }, 00:10:19.740 { 00:10:19.740 "name": "pt2", 00:10:19.740 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:19.740 "is_configured": true, 00:10:19.740 "data_offset": 2048, 00:10:19.740 "data_size": 63488 00:10:19.740 }, 00:10:19.740 { 00:10:19.740 "name": null, 00:10:19.740 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:19.740 "is_configured": false, 00:10:19.740 "data_offset": 2048, 00:10:19.740 "data_size": 63488 00:10:19.740 } 
00:10:19.740 ] 00:10:19.740 }' 00:10:19.740 16:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:19.740 16:06:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.310 16:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:10:20.310 16:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:10:20.310 16:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:10:20.310 16:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:20.310 16:06:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.310 16:06:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.310 [2024-12-12 16:06:46.412051] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:20.310 [2024-12-12 16:06:46.412206] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:20.310 [2024-12-12 16:06:46.412246] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:10:20.310 [2024-12-12 16:06:46.412278] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:20.310 [2024-12-12 16:06:46.412839] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:20.310 [2024-12-12 16:06:46.412919] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:20.310 [2024-12-12 16:06:46.413066] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:20.310 [2024-12-12 16:06:46.413130] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:20.310 [2024-12-12 16:06:46.413278] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 
00:10:20.310 [2024-12-12 16:06:46.413317] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:20.310 [2024-12-12 16:06:46.413637] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:20.310 [2024-12-12 16:06:46.413844] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:20.310 [2024-12-12 16:06:46.413886] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:20.310 [2024-12-12 16:06:46.414097] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:20.310 pt3 00:10:20.310 16:06:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.310 16:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:20.310 16:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:20.310 16:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:20.310 16:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:20.310 16:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:20.310 16:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:20.310 16:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:20.310 16:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:20.310 16:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:20.310 16:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:20.310 16:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.310 
16:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:20.310 16:06:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.310 16:06:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.310 16:06:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.310 16:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:20.310 "name": "raid_bdev1", 00:10:20.310 "uuid": "66448ceb-eb6b-4686-b106-a038b90bf1eb", 00:10:20.310 "strip_size_kb": 0, 00:10:20.310 "state": "online", 00:10:20.310 "raid_level": "raid1", 00:10:20.310 "superblock": true, 00:10:20.310 "num_base_bdevs": 3, 00:10:20.310 "num_base_bdevs_discovered": 2, 00:10:20.310 "num_base_bdevs_operational": 2, 00:10:20.310 "base_bdevs_list": [ 00:10:20.310 { 00:10:20.310 "name": null, 00:10:20.310 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:20.310 "is_configured": false, 00:10:20.310 "data_offset": 2048, 00:10:20.310 "data_size": 63488 00:10:20.310 }, 00:10:20.310 { 00:10:20.310 "name": "pt2", 00:10:20.310 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:20.310 "is_configured": true, 00:10:20.310 "data_offset": 2048, 00:10:20.310 "data_size": 63488 00:10:20.310 }, 00:10:20.310 { 00:10:20.310 "name": "pt3", 00:10:20.310 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:20.310 "is_configured": true, 00:10:20.310 "data_offset": 2048, 00:10:20.310 "data_size": 63488 00:10:20.310 } 00:10:20.310 ] 00:10:20.310 }' 00:10:20.310 16:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:20.310 16:06:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.569 16:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:20.569 16:06:46 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.569 16:06:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.569 [2024-12-12 16:06:46.863434] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:20.569 [2024-12-12 16:06:46.863489] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:20.569 [2024-12-12 16:06:46.863591] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:20.569 [2024-12-12 16:06:46.863674] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:20.569 [2024-12-12 16:06:46.863685] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:20.569 16:06:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.569 16:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.569 16:06:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.569 16:06:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.569 16:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:10:20.569 16:06:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.828 16:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:10:20.828 16:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:10:20.828 16:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:10:20.828 16:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:10:20.828 16:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:10:20.828 16:06:46 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.828 16:06:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.828 16:06:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.828 16:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:20.828 16:06:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.828 16:06:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.828 [2024-12-12 16:06:46.939274] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:20.828 [2024-12-12 16:06:46.939407] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:20.828 [2024-12-12 16:06:46.939431] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:10:20.828 [2024-12-12 16:06:46.939441] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:20.828 [2024-12-12 16:06:46.941996] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:20.828 [2024-12-12 16:06:46.942029] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:20.828 [2024-12-12 16:06:46.942114] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:20.828 [2024-12-12 16:06:46.942162] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:20.828 [2024-12-12 16:06:46.942302] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:10:20.828 [2024-12-12 16:06:46.942312] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:20.828 [2024-12-12 16:06:46.942329] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008580 name raid_bdev1, state configuring 00:10:20.829 [2024-12-12 16:06:46.942396] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:20.829 pt1 00:10:20.829 16:06:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.829 16:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:10:20.829 16:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:10:20.829 16:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:20.829 16:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:20.829 16:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:20.829 16:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:20.829 16:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:20.829 16:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:20.829 16:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:20.829 16:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:20.829 16:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:20.829 16:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.829 16:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:20.829 16:06:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.829 16:06:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.829 16:06:46 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.829 16:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:20.829 "name": "raid_bdev1", 00:10:20.829 "uuid": "66448ceb-eb6b-4686-b106-a038b90bf1eb", 00:10:20.829 "strip_size_kb": 0, 00:10:20.829 "state": "configuring", 00:10:20.829 "raid_level": "raid1", 00:10:20.829 "superblock": true, 00:10:20.829 "num_base_bdevs": 3, 00:10:20.829 "num_base_bdevs_discovered": 1, 00:10:20.829 "num_base_bdevs_operational": 2, 00:10:20.829 "base_bdevs_list": [ 00:10:20.829 { 00:10:20.829 "name": null, 00:10:20.829 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:20.829 "is_configured": false, 00:10:20.829 "data_offset": 2048, 00:10:20.829 "data_size": 63488 00:10:20.829 }, 00:10:20.829 { 00:10:20.829 "name": "pt2", 00:10:20.829 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:20.829 "is_configured": true, 00:10:20.829 "data_offset": 2048, 00:10:20.829 "data_size": 63488 00:10:20.829 }, 00:10:20.829 { 00:10:20.829 "name": null, 00:10:20.829 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:20.829 "is_configured": false, 00:10:20.829 "data_offset": 2048, 00:10:20.829 "data_size": 63488 00:10:20.829 } 00:10:20.829 ] 00:10:20.829 }' 00:10:20.829 16:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:20.829 16:06:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.087 16:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:10:21.087 16:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.087 16:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.087 16:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:10:21.087 16:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:21.087 16:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:10:21.087 16:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:21.087 16:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.087 16:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.347 [2024-12-12 16:06:47.438442] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:21.347 [2024-12-12 16:06:47.438522] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:21.347 [2024-12-12 16:06:47.438550] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:10:21.347 [2024-12-12 16:06:47.438559] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:21.347 [2024-12-12 16:06:47.439108] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:21.347 [2024-12-12 16:06:47.439126] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:21.347 [2024-12-12 16:06:47.439215] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:21.347 [2024-12-12 16:06:47.439240] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:21.347 [2024-12-12 16:06:47.439375] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:10:21.347 [2024-12-12 16:06:47.439383] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:21.347 [2024-12-12 16:06:47.439654] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:10:21.347 [2024-12-12 16:06:47.439829] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:10:21.347 [2024-12-12 16:06:47.439845] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:10:21.347 [2024-12-12 16:06:47.440007] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:21.347 pt3 00:10:21.347 16:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.347 16:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:21.347 16:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:21.347 16:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:21.347 16:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:21.347 16:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:21.347 16:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:21.347 16:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:21.347 16:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:21.347 16:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:21.347 16:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:21.347 16:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.347 16:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:21.347 16:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.347 16:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.347 16:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:10:21.347 16:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:21.347 "name": "raid_bdev1", 00:10:21.347 "uuid": "66448ceb-eb6b-4686-b106-a038b90bf1eb", 00:10:21.347 "strip_size_kb": 0, 00:10:21.347 "state": "online", 00:10:21.347 "raid_level": "raid1", 00:10:21.347 "superblock": true, 00:10:21.347 "num_base_bdevs": 3, 00:10:21.347 "num_base_bdevs_discovered": 2, 00:10:21.347 "num_base_bdevs_operational": 2, 00:10:21.347 "base_bdevs_list": [ 00:10:21.347 { 00:10:21.347 "name": null, 00:10:21.347 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:21.347 "is_configured": false, 00:10:21.347 "data_offset": 2048, 00:10:21.347 "data_size": 63488 00:10:21.347 }, 00:10:21.347 { 00:10:21.347 "name": "pt2", 00:10:21.347 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:21.347 "is_configured": true, 00:10:21.347 "data_offset": 2048, 00:10:21.347 "data_size": 63488 00:10:21.347 }, 00:10:21.347 { 00:10:21.347 "name": "pt3", 00:10:21.347 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:21.347 "is_configured": true, 00:10:21.347 "data_offset": 2048, 00:10:21.347 "data_size": 63488 00:10:21.347 } 00:10:21.347 ] 00:10:21.347 }' 00:10:21.347 16:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:21.347 16:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.607 16:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:10:21.607 16:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:10:21.607 16:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.607 16:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.607 16:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.607 16:06:47 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:10:21.607 16:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:21.607 16:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.607 16:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.607 16:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:10:21.607 [2024-12-12 16:06:47.854066] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:21.607 16:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.607 16:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 66448ceb-eb6b-4686-b106-a038b90bf1eb '!=' 66448ceb-eb6b-4686-b106-a038b90bf1eb ']' 00:10:21.607 16:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 70665 00:10:21.607 16:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 70665 ']' 00:10:21.607 16:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 70665 00:10:21.607 16:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:10:21.607 16:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:21.607 16:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70665 00:10:21.607 16:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:21.607 16:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:21.607 16:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70665' 00:10:21.607 killing process with pid 70665 00:10:21.607 16:06:47 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@973 -- # kill 70665 00:10:21.607 [2024-12-12 16:06:47.927118] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:21.607 [2024-12-12 16:06:47.927217] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:21.607 [2024-12-12 16:06:47.927285] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:21.607 [2024-12-12 16:06:47.927298] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:10:21.607 16:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 70665 00:10:22.174 [2024-12-12 16:06:48.259566] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:23.553 16:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:23.553 00:10:23.553 real 0m7.644s 00:10:23.553 user 0m11.717s 00:10:23.553 sys 0m1.398s 00:10:23.553 16:06:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:23.553 16:06:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.553 ************************************ 00:10:23.553 END TEST raid_superblock_test 00:10:23.553 ************************************ 00:10:23.553 16:06:49 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:10:23.553 16:06:49 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:23.553 16:06:49 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:23.553 16:06:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:23.553 ************************************ 00:10:23.553 START TEST raid_read_error_test 00:10:23.553 ************************************ 00:10:23.553 16:06:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 read 00:10:23.553 16:06:49 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:10:23.553 16:06:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:23.553 16:06:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:23.553 16:06:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:23.553 16:06:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:23.553 16:06:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:23.553 16:06:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:23.553 16:06:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:23.553 16:06:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:23.553 16:06:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:23.553 16:06:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:23.553 16:06:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:23.553 16:06:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:23.553 16:06:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:23.553 16:06:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:23.553 16:06:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:23.554 16:06:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:23.554 16:06:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:23.554 16:06:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:23.554 16:06:49 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:23.554 16:06:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:23.554 16:06:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:10:23.554 16:06:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:10:23.554 16:06:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:23.554 16:06:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.qTkeTXArHN 00:10:23.554 16:06:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71115 00:10:23.554 16:06:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:23.554 16:06:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71115 00:10:23.554 16:06:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 71115 ']' 00:10:23.554 16:06:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:23.554 16:06:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:23.554 16:06:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:23.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:23.554 16:06:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:23.554 16:06:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.554 [2024-12-12 16:06:49.671296] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:10:23.554 [2024-12-12 16:06:49.671421] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71115 ] 00:10:23.554 [2024-12-12 16:06:49.846291] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:23.812 [2024-12-12 16:06:49.980267] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:24.071 [2024-12-12 16:06:50.220686] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:24.071 [2024-12-12 16:06:50.220764] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:24.330 16:06:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:24.330 16:06:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:24.330 16:06:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:24.330 16:06:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:24.330 16:06:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.330 16:06:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.330 BaseBdev1_malloc 00:10:24.330 16:06:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.330 16:06:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:24.330 16:06:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.330 16:06:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.330 true 00:10:24.330 16:06:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:24.330 16:06:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:24.330 16:06:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.330 16:06:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.330 [2024-12-12 16:06:50.537951] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:24.330 [2024-12-12 16:06:50.538019] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:24.330 [2024-12-12 16:06:50.538039] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:24.330 [2024-12-12 16:06:50.538050] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:24.330 [2024-12-12 16:06:50.540311] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:24.330 [2024-12-12 16:06:50.540344] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:24.330 BaseBdev1 00:10:24.330 16:06:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.330 16:06:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:24.330 16:06:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:24.330 16:06:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.330 16:06:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.330 BaseBdev2_malloc 00:10:24.330 16:06:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.330 16:06:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:24.330 16:06:50 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.330 16:06:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.330 true 00:10:24.330 16:06:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.330 16:06:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:24.330 16:06:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.330 16:06:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.330 [2024-12-12 16:06:50.606503] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:24.330 [2024-12-12 16:06:50.606555] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:24.330 [2024-12-12 16:06:50.606572] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:24.330 [2024-12-12 16:06:50.606583] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:24.330 [2024-12-12 16:06:50.608866] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:24.330 [2024-12-12 16:06:50.608909] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:24.330 BaseBdev2 00:10:24.330 16:06:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.330 16:06:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:24.330 16:06:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:24.330 16:06:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.330 16:06:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.590 BaseBdev3_malloc 00:10:24.590 16:06:50 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.590 16:06:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:24.590 16:06:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.590 16:06:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.590 true 00:10:24.590 16:06:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.590 16:06:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:24.590 16:06:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.590 16:06:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.590 [2024-12-12 16:06:50.699260] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:24.590 [2024-12-12 16:06:50.699319] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:24.590 [2024-12-12 16:06:50.699337] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:24.590 [2024-12-12 16:06:50.699349] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:24.590 [2024-12-12 16:06:50.701736] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:24.590 [2024-12-12 16:06:50.701770] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:24.590 BaseBdev3 00:10:24.590 16:06:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.590 16:06:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:24.590 16:06:50 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.590 16:06:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.590 [2024-12-12 16:06:50.711318] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:24.590 [2024-12-12 16:06:50.713343] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:24.590 [2024-12-12 16:06:50.713417] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:24.590 [2024-12-12 16:06:50.713630] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:24.590 [2024-12-12 16:06:50.713647] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:24.590 [2024-12-12 16:06:50.713901] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:10:24.590 [2024-12-12 16:06:50.714092] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:24.590 [2024-12-12 16:06:50.714109] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:24.590 [2024-12-12 16:06:50.714249] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:24.590 16:06:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.590 16:06:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:24.590 16:06:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:24.590 16:06:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:24.590 16:06:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:24.590 16:06:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:24.590 16:06:50 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:24.590 16:06:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:24.590 16:06:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:24.590 16:06:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:24.590 16:06:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:24.590 16:06:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.590 16:06:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.590 16:06:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:24.590 16:06:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.590 16:06:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.590 16:06:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:24.590 "name": "raid_bdev1", 00:10:24.590 "uuid": "aeb870a8-1be0-4ab5-8ea0-e1f1a92ce86c", 00:10:24.590 "strip_size_kb": 0, 00:10:24.590 "state": "online", 00:10:24.590 "raid_level": "raid1", 00:10:24.590 "superblock": true, 00:10:24.590 "num_base_bdevs": 3, 00:10:24.590 "num_base_bdevs_discovered": 3, 00:10:24.590 "num_base_bdevs_operational": 3, 00:10:24.590 "base_bdevs_list": [ 00:10:24.590 { 00:10:24.590 "name": "BaseBdev1", 00:10:24.590 "uuid": "fa833a08-93ff-53c8-b6af-b2630fd602e9", 00:10:24.590 "is_configured": true, 00:10:24.590 "data_offset": 2048, 00:10:24.590 "data_size": 63488 00:10:24.590 }, 00:10:24.590 { 00:10:24.590 "name": "BaseBdev2", 00:10:24.590 "uuid": "2e69ec97-0883-55de-8e7f-0bc5b183d0da", 00:10:24.590 "is_configured": true, 00:10:24.590 "data_offset": 2048, 00:10:24.590 "data_size": 63488 
00:10:24.590 }, 00:10:24.590 { 00:10:24.590 "name": "BaseBdev3", 00:10:24.590 "uuid": "413b707a-49b3-5346-bf71-e669036216f0", 00:10:24.590 "is_configured": true, 00:10:24.590 "data_offset": 2048, 00:10:24.590 "data_size": 63488 00:10:24.590 } 00:10:24.590 ] 00:10:24.590 }' 00:10:24.590 16:06:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:24.590 16:06:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.850 16:06:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:24.850 16:06:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:25.110 [2024-12-12 16:06:51.256039] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:10:26.047 16:06:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:26.047 16:06:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.047 16:06:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.047 16:06:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.047 16:06:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:26.047 16:06:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:26.047 16:06:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:10:26.047 16:06:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:10:26.047 16:06:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:26.047 16:06:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:26.047 
16:06:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:26.047 16:06:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:26.047 16:06:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:26.047 16:06:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:26.047 16:06:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:26.047 16:06:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:26.047 16:06:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:26.047 16:06:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:26.047 16:06:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.047 16:06:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:26.047 16:06:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.047 16:06:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.047 16:06:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.047 16:06:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:26.047 "name": "raid_bdev1", 00:10:26.047 "uuid": "aeb870a8-1be0-4ab5-8ea0-e1f1a92ce86c", 00:10:26.047 "strip_size_kb": 0, 00:10:26.047 "state": "online", 00:10:26.047 "raid_level": "raid1", 00:10:26.047 "superblock": true, 00:10:26.047 "num_base_bdevs": 3, 00:10:26.047 "num_base_bdevs_discovered": 3, 00:10:26.047 "num_base_bdevs_operational": 3, 00:10:26.047 "base_bdevs_list": [ 00:10:26.047 { 00:10:26.047 "name": "BaseBdev1", 00:10:26.047 "uuid": "fa833a08-93ff-53c8-b6af-b2630fd602e9", 
00:10:26.047 "is_configured": true, 00:10:26.047 "data_offset": 2048, 00:10:26.047 "data_size": 63488 00:10:26.047 }, 00:10:26.047 { 00:10:26.047 "name": "BaseBdev2", 00:10:26.047 "uuid": "2e69ec97-0883-55de-8e7f-0bc5b183d0da", 00:10:26.047 "is_configured": true, 00:10:26.047 "data_offset": 2048, 00:10:26.047 "data_size": 63488 00:10:26.047 }, 00:10:26.047 { 00:10:26.047 "name": "BaseBdev3", 00:10:26.047 "uuid": "413b707a-49b3-5346-bf71-e669036216f0", 00:10:26.047 "is_configured": true, 00:10:26.047 "data_offset": 2048, 00:10:26.047 "data_size": 63488 00:10:26.047 } 00:10:26.047 ] 00:10:26.047 }' 00:10:26.047 16:06:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:26.047 16:06:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.307 16:06:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:26.307 16:06:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.307 16:06:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.307 [2024-12-12 16:06:52.587540] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:26.307 [2024-12-12 16:06:52.587592] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:26.307 [2024-12-12 16:06:52.590309] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:26.307 [2024-12-12 16:06:52.590365] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:26.307 [2024-12-12 16:06:52.590477] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:26.307 [2024-12-12 16:06:52.590493] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:26.307 { 00:10:26.307 "results": [ 00:10:26.307 { 00:10:26.307 "job": "raid_bdev1", 
00:10:26.307 "core_mask": "0x1", 00:10:26.307 "workload": "randrw", 00:10:26.307 "percentage": 50, 00:10:26.307 "status": "finished", 00:10:26.307 "queue_depth": 1, 00:10:26.307 "io_size": 131072, 00:10:26.307 "runtime": 1.332062, 00:10:26.307 "iops": 9916.205101564341, 00:10:26.307 "mibps": 1239.5256376955426, 00:10:26.307 "io_failed": 0, 00:10:26.307 "io_timeout": 0, 00:10:26.307 "avg_latency_us": 98.17338119007783, 00:10:26.307 "min_latency_us": 23.923144104803495, 00:10:26.307 "max_latency_us": 1631.2454148471616 00:10:26.307 } 00:10:26.307 ], 00:10:26.307 "core_count": 1 00:10:26.307 } 00:10:26.307 16:06:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.307 16:06:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71115 00:10:26.307 16:06:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 71115 ']' 00:10:26.307 16:06:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 71115 00:10:26.307 16:06:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:10:26.307 16:06:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:26.307 16:06:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71115 00:10:26.307 16:06:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:26.307 16:06:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:26.307 killing process with pid 71115 00:10:26.307 16:06:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71115' 00:10:26.307 16:06:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 71115 00:10:26.308 [2024-12-12 16:06:52.634121] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:26.308 16:06:52 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 71115 00:10:26.573 [2024-12-12 16:06:52.895080] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:27.961 16:06:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.qTkeTXArHN 00:10:27.961 16:06:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:27.961 16:06:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:27.961 16:06:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:10:27.961 16:06:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:27.961 16:06:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:27.961 16:06:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:27.961 16:06:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:10:27.961 00:10:27.961 real 0m4.650s 00:10:27.961 user 0m5.349s 00:10:27.961 sys 0m0.664s 00:10:27.961 16:06:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:27.961 16:06:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.961 ************************************ 00:10:27.961 END TEST raid_read_error_test 00:10:27.961 ************************************ 00:10:27.961 16:06:54 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:10:27.961 16:06:54 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:27.961 16:06:54 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:27.961 16:06:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:27.961 ************************************ 00:10:27.961 START TEST raid_write_error_test 00:10:27.961 ************************************ 00:10:27.961 16:06:54 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 write 00:10:27.961 16:06:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:10:27.961 16:06:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:27.961 16:06:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:27.961 16:06:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:27.961 16:06:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:27.961 16:06:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:27.961 16:06:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:27.961 16:06:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:27.961 16:06:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:27.961 16:06:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:27.961 16:06:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:27.961 16:06:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:27.961 16:06:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:27.961 16:06:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:27.961 16:06:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:27.961 16:06:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:27.961 16:06:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:27.961 16:06:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 
00:10:27.961 16:06:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:27.961 16:06:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:27.961 16:06:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:27.961 16:06:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:10:27.961 16:06:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:10:27.961 16:06:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:27.961 16:06:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.sBJakxgAMV 00:10:27.961 16:06:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71256 00:10:27.961 16:06:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71256 00:10:27.961 16:06:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:27.961 16:06:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 71256 ']' 00:10:27.961 16:06:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:27.961 16:06:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:27.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:27.961 16:06:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:27.961 16:06:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:27.961 16:06:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.221 [2024-12-12 16:06:54.390760] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:10:28.221 [2024-12-12 16:06:54.390876] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71256 ] 00:10:28.221 [2024-12-12 16:06:54.566623] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:28.481 [2024-12-12 16:06:54.707349] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:28.739 [2024-12-12 16:06:54.948443] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:28.739 [2024-12-12 16:06:54.948518] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:28.998 16:06:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:28.998 16:06:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:28.998 16:06:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:28.998 16:06:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:28.998 16:06:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.998 16:06:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.998 BaseBdev1_malloc 00:10:28.998 16:06:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.998 16:06:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:10:28.998 16:06:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.998 16:06:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.998 true 00:10:28.998 16:06:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.998 16:06:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:28.998 16:06:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.998 16:06:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.998 [2024-12-12 16:06:55.299936] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:28.998 [2024-12-12 16:06:55.300005] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:28.998 [2024-12-12 16:06:55.300030] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:28.998 [2024-12-12 16:06:55.300043] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:28.998 [2024-12-12 16:06:55.302472] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:28.998 [2024-12-12 16:06:55.302508] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:28.998 BaseBdev1 00:10:28.998 16:06:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.998 16:06:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:28.998 16:06:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:28.998 16:06:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.998 16:06:55 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:29.257 BaseBdev2_malloc 00:10:29.257 16:06:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.257 16:06:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:29.257 16:06:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.257 16:06:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.257 true 00:10:29.257 16:06:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.257 16:06:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:29.257 16:06:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.257 16:06:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.257 [2024-12-12 16:06:55.374027] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:29.257 [2024-12-12 16:06:55.374091] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:29.257 [2024-12-12 16:06:55.374109] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:29.257 [2024-12-12 16:06:55.374120] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:29.257 [2024-12-12 16:06:55.376465] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:29.257 [2024-12-12 16:06:55.376499] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:29.257 BaseBdev2 00:10:29.257 16:06:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.257 16:06:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:29.257 16:06:55 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:29.257 16:06:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.257 16:06:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.257 BaseBdev3_malloc 00:10:29.257 16:06:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.257 16:06:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:29.257 16:06:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.257 16:06:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.257 true 00:10:29.257 16:06:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.257 16:06:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:29.257 16:06:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.257 16:06:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.257 [2024-12-12 16:06:55.462507] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:29.258 [2024-12-12 16:06:55.462574] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:29.258 [2024-12-12 16:06:55.462594] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:29.258 [2024-12-12 16:06:55.462606] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:29.258 [2024-12-12 16:06:55.465355] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:29.258 [2024-12-12 16:06:55.465398] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:10:29.258 BaseBdev3 00:10:29.258 16:06:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.258 16:06:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:29.258 16:06:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.258 16:06:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.258 [2024-12-12 16:06:55.474563] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:29.258 [2024-12-12 16:06:55.476759] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:29.258 [2024-12-12 16:06:55.476853] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:29.258 [2024-12-12 16:06:55.477096] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:29.258 [2024-12-12 16:06:55.477116] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:29.258 [2024-12-12 16:06:55.477411] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:10:29.258 [2024-12-12 16:06:55.477605] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:29.258 [2024-12-12 16:06:55.477624] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:29.258 [2024-12-12 16:06:55.477812] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:29.258 16:06:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.258 16:06:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:29.258 16:06:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:10:29.258 16:06:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:29.258 16:06:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:29.258 16:06:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:29.258 16:06:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:29.258 16:06:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:29.258 16:06:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:29.258 16:06:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:29.258 16:06:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:29.258 16:06:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.258 16:06:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:29.258 16:06:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.258 16:06:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.258 16:06:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.258 16:06:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:29.258 "name": "raid_bdev1", 00:10:29.258 "uuid": "93814333-492d-446b-b80b-04b6beb8c5ad", 00:10:29.258 "strip_size_kb": 0, 00:10:29.258 "state": "online", 00:10:29.258 "raid_level": "raid1", 00:10:29.258 "superblock": true, 00:10:29.258 "num_base_bdevs": 3, 00:10:29.258 "num_base_bdevs_discovered": 3, 00:10:29.258 "num_base_bdevs_operational": 3, 00:10:29.258 "base_bdevs_list": [ 00:10:29.258 { 00:10:29.258 "name": "BaseBdev1", 00:10:29.258 
"uuid": "cdf9e7d2-af73-568f-b00f-aa4bd54f5760", 00:10:29.258 "is_configured": true, 00:10:29.258 "data_offset": 2048, 00:10:29.258 "data_size": 63488 00:10:29.258 }, 00:10:29.258 { 00:10:29.258 "name": "BaseBdev2", 00:10:29.258 "uuid": "031d12e5-a854-5c3e-b97c-74698c9fc9b9", 00:10:29.258 "is_configured": true, 00:10:29.258 "data_offset": 2048, 00:10:29.258 "data_size": 63488 00:10:29.258 }, 00:10:29.258 { 00:10:29.258 "name": "BaseBdev3", 00:10:29.258 "uuid": "bb22cf89-c504-511a-9725-db4ac232961b", 00:10:29.258 "is_configured": true, 00:10:29.258 "data_offset": 2048, 00:10:29.258 "data_size": 63488 00:10:29.258 } 00:10:29.258 ] 00:10:29.258 }' 00:10:29.258 16:06:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:29.258 16:06:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.518 16:06:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:29.518 16:06:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:29.777 [2024-12-12 16:06:55.947357] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:10:30.714 16:06:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:30.714 16:06:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.714 16:06:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.714 [2024-12-12 16:06:56.865970] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:10:30.714 [2024-12-12 16:06:56.866046] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:30.714 [2024-12-12 16:06:56.866281] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006700 
00:10:30.714 16:06:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.714 16:06:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:30.714 16:06:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:30.714 16:06:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:10:30.714 16:06:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:10:30.714 16:06:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:30.714 16:06:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:30.714 16:06:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:30.714 16:06:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:30.714 16:06:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:30.714 16:06:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:30.714 16:06:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:30.714 16:06:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:30.714 16:06:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:30.714 16:06:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:30.714 16:06:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.714 16:06:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:30.714 16:06:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:30.714 16:06:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.714 16:06:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.714 16:06:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:30.714 "name": "raid_bdev1", 00:10:30.714 "uuid": "93814333-492d-446b-b80b-04b6beb8c5ad", 00:10:30.714 "strip_size_kb": 0, 00:10:30.714 "state": "online", 00:10:30.714 "raid_level": "raid1", 00:10:30.714 "superblock": true, 00:10:30.714 "num_base_bdevs": 3, 00:10:30.714 "num_base_bdevs_discovered": 2, 00:10:30.714 "num_base_bdevs_operational": 2, 00:10:30.714 "base_bdevs_list": [ 00:10:30.714 { 00:10:30.714 "name": null, 00:10:30.714 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.714 "is_configured": false, 00:10:30.714 "data_offset": 0, 00:10:30.714 "data_size": 63488 00:10:30.714 }, 00:10:30.714 { 00:10:30.714 "name": "BaseBdev2", 00:10:30.714 "uuid": "031d12e5-a854-5c3e-b97c-74698c9fc9b9", 00:10:30.714 "is_configured": true, 00:10:30.714 "data_offset": 2048, 00:10:30.714 "data_size": 63488 00:10:30.714 }, 00:10:30.714 { 00:10:30.714 "name": "BaseBdev3", 00:10:30.714 "uuid": "bb22cf89-c504-511a-9725-db4ac232961b", 00:10:30.714 "is_configured": true, 00:10:30.714 "data_offset": 2048, 00:10:30.714 "data_size": 63488 00:10:30.714 } 00:10:30.714 ] 00:10:30.714 }' 00:10:30.714 16:06:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:30.714 16:06:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.973 16:06:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:30.973 16:06:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.973 16:06:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.973 [2024-12-12 16:06:57.268842] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:30.973 [2024-12-12 16:06:57.268912] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:30.973 [2024-12-12 16:06:57.271714] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:30.973 [2024-12-12 16:06:57.271786] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:30.973 [2024-12-12 16:06:57.271875] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:30.973 [2024-12-12 16:06:57.271902] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:30.973 { 00:10:30.973 "results": [ 00:10:30.973 { 00:10:30.973 "job": "raid_bdev1", 00:10:30.973 "core_mask": "0x1", 00:10:30.973 "workload": "randrw", 00:10:30.973 "percentage": 50, 00:10:30.973 "status": "finished", 00:10:30.973 "queue_depth": 1, 00:10:30.973 "io_size": 131072, 00:10:30.973 "runtime": 1.32214, 00:10:30.973 "iops": 11321.040131907363, 00:10:30.973 "mibps": 1415.1300164884203, 00:10:30.973 "io_failed": 0, 00:10:30.973 "io_timeout": 0, 00:10:30.973 "avg_latency_us": 85.65871646995396, 00:10:30.973 "min_latency_us": 23.923144104803495, 00:10:30.973 "max_latency_us": 1445.2262008733624 00:10:30.973 } 00:10:30.973 ], 00:10:30.973 "core_count": 1 00:10:30.973 } 00:10:30.973 16:06:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.973 16:06:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71256 00:10:30.973 16:06:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 71256 ']' 00:10:30.973 16:06:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 71256 00:10:30.973 16:06:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:10:30.973 16:06:57 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:30.973 16:06:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71256 00:10:30.973 16:06:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:30.973 16:06:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:30.973 killing process with pid 71256 00:10:30.973 16:06:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71256' 00:10:30.973 16:06:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 71256 00:10:30.973 [2024-12-12 16:06:57.318636] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:30.973 16:06:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 71256 00:10:31.233 [2024-12-12 16:06:57.574191] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:32.611 16:06:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.sBJakxgAMV 00:10:32.611 16:06:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:32.611 16:06:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:32.611 16:06:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:10:32.611 16:06:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:32.611 16:06:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:32.611 16:06:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:32.611 16:06:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:10:32.611 00:10:32.611 real 0m4.619s 00:10:32.611 user 0m5.283s 00:10:32.611 sys 0m0.647s 00:10:32.611 16:06:58 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:32.611 16:06:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.611 ************************************ 00:10:32.611 END TEST raid_write_error_test 00:10:32.611 ************************************ 00:10:32.611 16:06:58 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:10:32.611 16:06:58 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:32.611 16:06:58 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:10:32.611 16:06:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:32.611 16:06:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:32.611 16:06:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:32.870 ************************************ 00:10:32.870 START TEST raid_state_function_test 00:10:32.870 ************************************ 00:10:32.870 16:06:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 false 00:10:32.870 16:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:32.870 16:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:32.870 16:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:32.870 16:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:32.870 16:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:32.870 16:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:32.870 16:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:32.870 16:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- 
# (( i++ )) 00:10:32.870 16:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:32.870 16:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:32.870 16:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:32.870 16:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:32.870 16:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:32.870 16:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:32.870 16:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:32.870 16:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:32.870 16:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:32.871 16:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:32.871 16:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:32.871 16:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:32.871 16:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:32.871 16:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:32.871 16:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:32.871 16:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:32.871 16:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:32.871 16:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:32.871 
16:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:32.871 16:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:32.871 16:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:32.871 16:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=71400 00:10:32.871 16:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:32.871 16:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71400' 00:10:32.871 Process raid pid: 71400 00:10:32.871 16:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 71400 00:10:32.871 16:06:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 71400 ']' 00:10:32.871 16:06:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:32.871 16:06:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:32.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:32.871 16:06:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:32.871 16:06:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:32.871 16:06:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.871 [2024-12-12 16:06:59.075975] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:10:32.871 [2024-12-12 16:06:59.076109] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:33.130 [2024-12-12 16:06:59.250703] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:33.130 [2024-12-12 16:06:59.390335] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:33.390 [2024-12-12 16:06:59.643843] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:33.390 [2024-12-12 16:06:59.643886] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:33.649 16:06:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:33.649 16:06:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:10:33.649 16:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:33.649 16:06:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.649 16:06:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.649 [2024-12-12 16:06:59.905081] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:33.649 [2024-12-12 16:06:59.905143] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:33.649 [2024-12-12 16:06:59.905160] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:33.649 [2024-12-12 16:06:59.905171] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:33.649 [2024-12-12 16:06:59.905177] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:10:33.649 [2024-12-12 16:06:59.905186] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:33.649 [2024-12-12 16:06:59.905192] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:33.649 [2024-12-12 16:06:59.905201] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:33.649 16:06:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.649 16:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:33.649 16:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:33.649 16:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:33.649 16:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:33.649 16:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:33.649 16:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:33.649 16:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:33.649 16:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:33.649 16:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:33.649 16:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:33.649 16:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:33.649 16:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.649 16:06:59 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.649 16:06:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.649 16:06:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.649 16:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:33.649 "name": "Existed_Raid", 00:10:33.649 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.649 "strip_size_kb": 64, 00:10:33.649 "state": "configuring", 00:10:33.649 "raid_level": "raid0", 00:10:33.649 "superblock": false, 00:10:33.649 "num_base_bdevs": 4, 00:10:33.649 "num_base_bdevs_discovered": 0, 00:10:33.649 "num_base_bdevs_operational": 4, 00:10:33.649 "base_bdevs_list": [ 00:10:33.649 { 00:10:33.649 "name": "BaseBdev1", 00:10:33.649 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.649 "is_configured": false, 00:10:33.649 "data_offset": 0, 00:10:33.649 "data_size": 0 00:10:33.649 }, 00:10:33.649 { 00:10:33.649 "name": "BaseBdev2", 00:10:33.649 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.649 "is_configured": false, 00:10:33.649 "data_offset": 0, 00:10:33.649 "data_size": 0 00:10:33.649 }, 00:10:33.649 { 00:10:33.649 "name": "BaseBdev3", 00:10:33.649 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.649 "is_configured": false, 00:10:33.649 "data_offset": 0, 00:10:33.649 "data_size": 0 00:10:33.649 }, 00:10:33.649 { 00:10:33.649 "name": "BaseBdev4", 00:10:33.649 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.649 "is_configured": false, 00:10:33.649 "data_offset": 0, 00:10:33.649 "data_size": 0 00:10:33.649 } 00:10:33.649 ] 00:10:33.649 }' 00:10:33.649 16:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:33.649 16:06:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.218 16:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:10:34.218 16:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.218 16:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.218 [2024-12-12 16:07:00.284453] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:34.218 [2024-12-12 16:07:00.284518] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:34.218 16:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.218 16:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:34.218 16:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.218 16:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.218 [2024-12-12 16:07:00.296382] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:34.218 [2024-12-12 16:07:00.296436] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:34.218 [2024-12-12 16:07:00.296447] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:34.218 [2024-12-12 16:07:00.296459] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:34.218 [2024-12-12 16:07:00.296466] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:34.218 [2024-12-12 16:07:00.296477] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:34.218 [2024-12-12 16:07:00.296483] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:34.218 [2024-12-12 16:07:00.296495] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:34.218 16:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.218 16:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:34.218 16:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.218 16:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.218 [2024-12-12 16:07:00.356384] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:34.219 BaseBdev1 00:10:34.219 16:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.219 16:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:34.219 16:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:34.219 16:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:34.219 16:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:34.219 16:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:34.219 16:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:34.219 16:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:34.219 16:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.219 16:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.219 16:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.219 16:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:34.219 16:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.219 16:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.219 [ 00:10:34.219 { 00:10:34.219 "name": "BaseBdev1", 00:10:34.219 "aliases": [ 00:10:34.219 "e39d5eb2-b7d0-439d-acf1-07706b91efab" 00:10:34.219 ], 00:10:34.219 "product_name": "Malloc disk", 00:10:34.219 "block_size": 512, 00:10:34.219 "num_blocks": 65536, 00:10:34.219 "uuid": "e39d5eb2-b7d0-439d-acf1-07706b91efab", 00:10:34.219 "assigned_rate_limits": { 00:10:34.219 "rw_ios_per_sec": 0, 00:10:34.219 "rw_mbytes_per_sec": 0, 00:10:34.219 "r_mbytes_per_sec": 0, 00:10:34.219 "w_mbytes_per_sec": 0 00:10:34.219 }, 00:10:34.219 "claimed": true, 00:10:34.219 "claim_type": "exclusive_write", 00:10:34.219 "zoned": false, 00:10:34.219 "supported_io_types": { 00:10:34.219 "read": true, 00:10:34.219 "write": true, 00:10:34.219 "unmap": true, 00:10:34.219 "flush": true, 00:10:34.219 "reset": true, 00:10:34.219 "nvme_admin": false, 00:10:34.219 "nvme_io": false, 00:10:34.219 "nvme_io_md": false, 00:10:34.219 "write_zeroes": true, 00:10:34.219 "zcopy": true, 00:10:34.219 "get_zone_info": false, 00:10:34.219 "zone_management": false, 00:10:34.219 "zone_append": false, 00:10:34.219 "compare": false, 00:10:34.219 "compare_and_write": false, 00:10:34.219 "abort": true, 00:10:34.219 "seek_hole": false, 00:10:34.219 "seek_data": false, 00:10:34.219 "copy": true, 00:10:34.219 "nvme_iov_md": false 00:10:34.219 }, 00:10:34.219 "memory_domains": [ 00:10:34.219 { 00:10:34.219 "dma_device_id": "system", 00:10:34.219 "dma_device_type": 1 00:10:34.219 }, 00:10:34.219 { 00:10:34.219 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.219 "dma_device_type": 2 00:10:34.219 } 00:10:34.219 ], 00:10:34.219 "driver_specific": {} 00:10:34.219 } 00:10:34.219 ] 00:10:34.219 16:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:10:34.219 16:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:34.219 16:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:34.219 16:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:34.219 16:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:34.219 16:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:34.219 16:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:34.219 16:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:34.219 16:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:34.219 16:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:34.219 16:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:34.219 16:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:34.219 16:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.219 16:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.219 16:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.219 16:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:34.219 16:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.219 16:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:34.219 "name": "Existed_Raid", 
00:10:34.219 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.219 "strip_size_kb": 64, 00:10:34.219 "state": "configuring", 00:10:34.219 "raid_level": "raid0", 00:10:34.219 "superblock": false, 00:10:34.219 "num_base_bdevs": 4, 00:10:34.219 "num_base_bdevs_discovered": 1, 00:10:34.219 "num_base_bdevs_operational": 4, 00:10:34.219 "base_bdevs_list": [ 00:10:34.219 { 00:10:34.219 "name": "BaseBdev1", 00:10:34.219 "uuid": "e39d5eb2-b7d0-439d-acf1-07706b91efab", 00:10:34.219 "is_configured": true, 00:10:34.219 "data_offset": 0, 00:10:34.219 "data_size": 65536 00:10:34.219 }, 00:10:34.219 { 00:10:34.219 "name": "BaseBdev2", 00:10:34.219 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.219 "is_configured": false, 00:10:34.219 "data_offset": 0, 00:10:34.219 "data_size": 0 00:10:34.219 }, 00:10:34.219 { 00:10:34.219 "name": "BaseBdev3", 00:10:34.219 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.219 "is_configured": false, 00:10:34.219 "data_offset": 0, 00:10:34.219 "data_size": 0 00:10:34.219 }, 00:10:34.219 { 00:10:34.219 "name": "BaseBdev4", 00:10:34.219 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.219 "is_configured": false, 00:10:34.219 "data_offset": 0, 00:10:34.219 "data_size": 0 00:10:34.219 } 00:10:34.219 ] 00:10:34.219 }' 00:10:34.219 16:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:34.219 16:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.483 16:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:34.483 16:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.483 16:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.483 [2024-12-12 16:07:00.827726] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:34.483 [2024-12-12 16:07:00.827810] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:34.749 16:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.749 16:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:34.749 16:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.749 16:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.749 [2024-12-12 16:07:00.839711] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:34.749 [2024-12-12 16:07:00.841822] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:34.749 [2024-12-12 16:07:00.841869] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:34.749 [2024-12-12 16:07:00.841879] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:34.749 [2024-12-12 16:07:00.841899] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:34.749 [2024-12-12 16:07:00.841907] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:34.749 [2024-12-12 16:07:00.841915] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:34.749 16:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.749 16:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:34.749 16:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:34.749 16:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
00:10:34.749 16:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:34.749 16:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:34.749 16:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:34.749 16:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:34.749 16:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:34.749 16:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:34.749 16:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:34.749 16:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:34.749 16:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:34.749 16:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:34.750 16:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.750 16:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.750 16:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.750 16:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.750 16:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:34.750 "name": "Existed_Raid", 00:10:34.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.750 "strip_size_kb": 64, 00:10:34.750 "state": "configuring", 00:10:34.750 "raid_level": "raid0", 00:10:34.750 "superblock": false, 00:10:34.750 "num_base_bdevs": 4, 00:10:34.750 
"num_base_bdevs_discovered": 1, 00:10:34.750 "num_base_bdevs_operational": 4, 00:10:34.750 "base_bdevs_list": [ 00:10:34.750 { 00:10:34.750 "name": "BaseBdev1", 00:10:34.750 "uuid": "e39d5eb2-b7d0-439d-acf1-07706b91efab", 00:10:34.750 "is_configured": true, 00:10:34.750 "data_offset": 0, 00:10:34.750 "data_size": 65536 00:10:34.750 }, 00:10:34.750 { 00:10:34.750 "name": "BaseBdev2", 00:10:34.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.750 "is_configured": false, 00:10:34.750 "data_offset": 0, 00:10:34.750 "data_size": 0 00:10:34.750 }, 00:10:34.750 { 00:10:34.750 "name": "BaseBdev3", 00:10:34.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.750 "is_configured": false, 00:10:34.750 "data_offset": 0, 00:10:34.750 "data_size": 0 00:10:34.750 }, 00:10:34.750 { 00:10:34.750 "name": "BaseBdev4", 00:10:34.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.750 "is_configured": false, 00:10:34.750 "data_offset": 0, 00:10:34.750 "data_size": 0 00:10:34.750 } 00:10:34.750 ] 00:10:34.750 }' 00:10:34.750 16:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:34.750 16:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.009 16:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:35.009 16:07:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.009 16:07:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.009 [2024-12-12 16:07:01.348972] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:35.009 BaseBdev2 00:10:35.009 16:07:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.009 16:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:35.009 16:07:01 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:35.009 16:07:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:35.009 16:07:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:35.009 16:07:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:35.009 16:07:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:35.009 16:07:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:35.009 16:07:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.009 16:07:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.267 16:07:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.267 16:07:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:35.267 16:07:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.267 16:07:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.267 [ 00:10:35.267 { 00:10:35.267 "name": "BaseBdev2", 00:10:35.267 "aliases": [ 00:10:35.267 "db63d6da-13bc-40c1-bd76-470ab6b5e483" 00:10:35.267 ], 00:10:35.267 "product_name": "Malloc disk", 00:10:35.267 "block_size": 512, 00:10:35.267 "num_blocks": 65536, 00:10:35.267 "uuid": "db63d6da-13bc-40c1-bd76-470ab6b5e483", 00:10:35.267 "assigned_rate_limits": { 00:10:35.267 "rw_ios_per_sec": 0, 00:10:35.267 "rw_mbytes_per_sec": 0, 00:10:35.267 "r_mbytes_per_sec": 0, 00:10:35.267 "w_mbytes_per_sec": 0 00:10:35.267 }, 00:10:35.267 "claimed": true, 00:10:35.267 "claim_type": "exclusive_write", 00:10:35.267 "zoned": false, 00:10:35.267 "supported_io_types": { 
00:10:35.267 "read": true, 00:10:35.267 "write": true, 00:10:35.267 "unmap": true, 00:10:35.267 "flush": true, 00:10:35.267 "reset": true, 00:10:35.267 "nvme_admin": false, 00:10:35.267 "nvme_io": false, 00:10:35.267 "nvme_io_md": false, 00:10:35.267 "write_zeroes": true, 00:10:35.267 "zcopy": true, 00:10:35.267 "get_zone_info": false, 00:10:35.267 "zone_management": false, 00:10:35.267 "zone_append": false, 00:10:35.267 "compare": false, 00:10:35.267 "compare_and_write": false, 00:10:35.267 "abort": true, 00:10:35.267 "seek_hole": false, 00:10:35.267 "seek_data": false, 00:10:35.267 "copy": true, 00:10:35.267 "nvme_iov_md": false 00:10:35.268 }, 00:10:35.268 "memory_domains": [ 00:10:35.268 { 00:10:35.268 "dma_device_id": "system", 00:10:35.268 "dma_device_type": 1 00:10:35.268 }, 00:10:35.268 { 00:10:35.268 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:35.268 "dma_device_type": 2 00:10:35.268 } 00:10:35.268 ], 00:10:35.268 "driver_specific": {} 00:10:35.268 } 00:10:35.268 ] 00:10:35.268 16:07:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.268 16:07:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:35.268 16:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:35.268 16:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:35.268 16:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:35.268 16:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:35.268 16:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:35.268 16:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:35.268 16:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:10:35.268 16:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:35.268 16:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:35.268 16:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:35.268 16:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:35.268 16:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:35.268 16:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.268 16:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:35.268 16:07:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.268 16:07:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.268 16:07:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.268 16:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:35.268 "name": "Existed_Raid", 00:10:35.268 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.268 "strip_size_kb": 64, 00:10:35.268 "state": "configuring", 00:10:35.268 "raid_level": "raid0", 00:10:35.268 "superblock": false, 00:10:35.268 "num_base_bdevs": 4, 00:10:35.268 "num_base_bdevs_discovered": 2, 00:10:35.268 "num_base_bdevs_operational": 4, 00:10:35.268 "base_bdevs_list": [ 00:10:35.268 { 00:10:35.268 "name": "BaseBdev1", 00:10:35.268 "uuid": "e39d5eb2-b7d0-439d-acf1-07706b91efab", 00:10:35.268 "is_configured": true, 00:10:35.268 "data_offset": 0, 00:10:35.268 "data_size": 65536 00:10:35.268 }, 00:10:35.268 { 00:10:35.268 "name": "BaseBdev2", 00:10:35.268 "uuid": "db63d6da-13bc-40c1-bd76-470ab6b5e483", 00:10:35.268 
"is_configured": true, 00:10:35.268 "data_offset": 0, 00:10:35.268 "data_size": 65536 00:10:35.268 }, 00:10:35.268 { 00:10:35.268 "name": "BaseBdev3", 00:10:35.268 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.268 "is_configured": false, 00:10:35.268 "data_offset": 0, 00:10:35.268 "data_size": 0 00:10:35.268 }, 00:10:35.268 { 00:10:35.268 "name": "BaseBdev4", 00:10:35.268 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.268 "is_configured": false, 00:10:35.268 "data_offset": 0, 00:10:35.268 "data_size": 0 00:10:35.268 } 00:10:35.268 ] 00:10:35.268 }' 00:10:35.268 16:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:35.268 16:07:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.527 16:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:35.527 16:07:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.527 16:07:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.786 [2024-12-12 16:07:01.884358] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:35.786 BaseBdev3 00:10:35.786 16:07:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.786 16:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:35.786 16:07:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:35.786 16:07:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:35.786 16:07:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:35.786 16:07:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:35.786 16:07:01 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:35.786 16:07:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:35.786 16:07:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.786 16:07:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.786 16:07:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.786 16:07:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:35.786 16:07:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.786 16:07:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.786 [ 00:10:35.786 { 00:10:35.786 "name": "BaseBdev3", 00:10:35.786 "aliases": [ 00:10:35.786 "9c100a69-cdfa-4183-90b2-936c4555c73e" 00:10:35.786 ], 00:10:35.786 "product_name": "Malloc disk", 00:10:35.786 "block_size": 512, 00:10:35.786 "num_blocks": 65536, 00:10:35.786 "uuid": "9c100a69-cdfa-4183-90b2-936c4555c73e", 00:10:35.786 "assigned_rate_limits": { 00:10:35.786 "rw_ios_per_sec": 0, 00:10:35.786 "rw_mbytes_per_sec": 0, 00:10:35.786 "r_mbytes_per_sec": 0, 00:10:35.786 "w_mbytes_per_sec": 0 00:10:35.786 }, 00:10:35.786 "claimed": true, 00:10:35.786 "claim_type": "exclusive_write", 00:10:35.786 "zoned": false, 00:10:35.786 "supported_io_types": { 00:10:35.786 "read": true, 00:10:35.786 "write": true, 00:10:35.786 "unmap": true, 00:10:35.786 "flush": true, 00:10:35.786 "reset": true, 00:10:35.786 "nvme_admin": false, 00:10:35.786 "nvme_io": false, 00:10:35.786 "nvme_io_md": false, 00:10:35.786 "write_zeroes": true, 00:10:35.786 "zcopy": true, 00:10:35.786 "get_zone_info": false, 00:10:35.786 "zone_management": false, 00:10:35.786 "zone_append": false, 00:10:35.786 "compare": false, 00:10:35.786 "compare_and_write": false, 
00:10:35.786 "abort": true, 00:10:35.786 "seek_hole": false, 00:10:35.786 "seek_data": false, 00:10:35.786 "copy": true, 00:10:35.786 "nvme_iov_md": false 00:10:35.786 }, 00:10:35.786 "memory_domains": [ 00:10:35.786 { 00:10:35.786 "dma_device_id": "system", 00:10:35.786 "dma_device_type": 1 00:10:35.786 }, 00:10:35.786 { 00:10:35.786 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:35.786 "dma_device_type": 2 00:10:35.786 } 00:10:35.786 ], 00:10:35.786 "driver_specific": {} 00:10:35.786 } 00:10:35.786 ] 00:10:35.786 16:07:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.786 16:07:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:35.786 16:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:35.786 16:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:35.786 16:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:35.786 16:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:35.786 16:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:35.786 16:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:35.786 16:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:35.786 16:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:35.786 16:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:35.786 16:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:35.786 16:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:10:35.786 16:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:35.786 16:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.786 16:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:35.786 16:07:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.786 16:07:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.786 16:07:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.786 16:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:35.786 "name": "Existed_Raid", 00:10:35.786 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.786 "strip_size_kb": 64, 00:10:35.786 "state": "configuring", 00:10:35.786 "raid_level": "raid0", 00:10:35.786 "superblock": false, 00:10:35.786 "num_base_bdevs": 4, 00:10:35.786 "num_base_bdevs_discovered": 3, 00:10:35.786 "num_base_bdevs_operational": 4, 00:10:35.786 "base_bdevs_list": [ 00:10:35.786 { 00:10:35.786 "name": "BaseBdev1", 00:10:35.786 "uuid": "e39d5eb2-b7d0-439d-acf1-07706b91efab", 00:10:35.786 "is_configured": true, 00:10:35.786 "data_offset": 0, 00:10:35.786 "data_size": 65536 00:10:35.786 }, 00:10:35.786 { 00:10:35.786 "name": "BaseBdev2", 00:10:35.786 "uuid": "db63d6da-13bc-40c1-bd76-470ab6b5e483", 00:10:35.786 "is_configured": true, 00:10:35.786 "data_offset": 0, 00:10:35.786 "data_size": 65536 00:10:35.786 }, 00:10:35.786 { 00:10:35.786 "name": "BaseBdev3", 00:10:35.786 "uuid": "9c100a69-cdfa-4183-90b2-936c4555c73e", 00:10:35.786 "is_configured": true, 00:10:35.786 "data_offset": 0, 00:10:35.786 "data_size": 65536 00:10:35.786 }, 00:10:35.786 { 00:10:35.786 "name": "BaseBdev4", 00:10:35.786 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.786 "is_configured": false, 
00:10:35.786 "data_offset": 0, 00:10:35.786 "data_size": 0 00:10:35.786 } 00:10:35.786 ] 00:10:35.786 }' 00:10:35.786 16:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:35.786 16:07:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.046 16:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:36.046 16:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.046 16:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.305 [2024-12-12 16:07:02.407281] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:36.305 [2024-12-12 16:07:02.407417] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:36.305 [2024-12-12 16:07:02.407443] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:36.305 [2024-12-12 16:07:02.407806] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:36.305 [2024-12-12 16:07:02.408044] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:36.305 [2024-12-12 16:07:02.408094] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:36.305 [2024-12-12 16:07:02.408425] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:36.305 BaseBdev4 00:10:36.305 16:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.305 16:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:36.305 16:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:36.305 16:07:02 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:36.305 16:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:36.305 16:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:36.305 16:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:36.305 16:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:36.305 16:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.305 16:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.305 16:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.305 16:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:36.305 16:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.305 16:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.305 [ 00:10:36.305 { 00:10:36.305 "name": "BaseBdev4", 00:10:36.305 "aliases": [ 00:10:36.305 "40d1e5ee-e0c3-4d76-a61b-41c5fd22442f" 00:10:36.305 ], 00:10:36.305 "product_name": "Malloc disk", 00:10:36.305 "block_size": 512, 00:10:36.305 "num_blocks": 65536, 00:10:36.305 "uuid": "40d1e5ee-e0c3-4d76-a61b-41c5fd22442f", 00:10:36.305 "assigned_rate_limits": { 00:10:36.305 "rw_ios_per_sec": 0, 00:10:36.305 "rw_mbytes_per_sec": 0, 00:10:36.305 "r_mbytes_per_sec": 0, 00:10:36.305 "w_mbytes_per_sec": 0 00:10:36.305 }, 00:10:36.305 "claimed": true, 00:10:36.305 "claim_type": "exclusive_write", 00:10:36.305 "zoned": false, 00:10:36.305 "supported_io_types": { 00:10:36.305 "read": true, 00:10:36.305 "write": true, 00:10:36.305 "unmap": true, 00:10:36.305 "flush": true, 00:10:36.305 "reset": true, 00:10:36.305 
"nvme_admin": false, 00:10:36.305 "nvme_io": false, 00:10:36.305 "nvme_io_md": false, 00:10:36.305 "write_zeroes": true, 00:10:36.305 "zcopy": true, 00:10:36.305 "get_zone_info": false, 00:10:36.305 "zone_management": false, 00:10:36.305 "zone_append": false, 00:10:36.305 "compare": false, 00:10:36.305 "compare_and_write": false, 00:10:36.305 "abort": true, 00:10:36.305 "seek_hole": false, 00:10:36.305 "seek_data": false, 00:10:36.305 "copy": true, 00:10:36.305 "nvme_iov_md": false 00:10:36.305 }, 00:10:36.305 "memory_domains": [ 00:10:36.305 { 00:10:36.305 "dma_device_id": "system", 00:10:36.305 "dma_device_type": 1 00:10:36.305 }, 00:10:36.305 { 00:10:36.305 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.305 "dma_device_type": 2 00:10:36.305 } 00:10:36.305 ], 00:10:36.305 "driver_specific": {} 00:10:36.305 } 00:10:36.305 ] 00:10:36.305 16:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.305 16:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:36.305 16:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:36.305 16:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:36.305 16:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:10:36.305 16:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:36.305 16:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:36.305 16:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:36.305 16:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:36.305 16:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:36.305 16:07:02 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:36.305 16:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:36.305 16:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:36.305 16:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:36.305 16:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.305 16:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:36.305 16:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.305 16:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.305 16:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.305 16:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:36.305 "name": "Existed_Raid", 00:10:36.305 "uuid": "aae530f9-2b85-4a31-8f48-88a6cf4327bd", 00:10:36.305 "strip_size_kb": 64, 00:10:36.305 "state": "online", 00:10:36.305 "raid_level": "raid0", 00:10:36.305 "superblock": false, 00:10:36.305 "num_base_bdevs": 4, 00:10:36.305 "num_base_bdevs_discovered": 4, 00:10:36.305 "num_base_bdevs_operational": 4, 00:10:36.305 "base_bdevs_list": [ 00:10:36.305 { 00:10:36.305 "name": "BaseBdev1", 00:10:36.305 "uuid": "e39d5eb2-b7d0-439d-acf1-07706b91efab", 00:10:36.305 "is_configured": true, 00:10:36.305 "data_offset": 0, 00:10:36.305 "data_size": 65536 00:10:36.305 }, 00:10:36.305 { 00:10:36.305 "name": "BaseBdev2", 00:10:36.305 "uuid": "db63d6da-13bc-40c1-bd76-470ab6b5e483", 00:10:36.305 "is_configured": true, 00:10:36.305 "data_offset": 0, 00:10:36.305 "data_size": 65536 00:10:36.305 }, 00:10:36.305 { 00:10:36.305 "name": "BaseBdev3", 00:10:36.305 "uuid": 
"9c100a69-cdfa-4183-90b2-936c4555c73e", 00:10:36.305 "is_configured": true, 00:10:36.305 "data_offset": 0, 00:10:36.305 "data_size": 65536 00:10:36.305 }, 00:10:36.305 { 00:10:36.305 "name": "BaseBdev4", 00:10:36.305 "uuid": "40d1e5ee-e0c3-4d76-a61b-41c5fd22442f", 00:10:36.305 "is_configured": true, 00:10:36.305 "data_offset": 0, 00:10:36.305 "data_size": 65536 00:10:36.305 } 00:10:36.305 ] 00:10:36.305 }' 00:10:36.305 16:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:36.305 16:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.564 16:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:36.564 16:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:36.564 16:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:36.564 16:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:36.564 16:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:36.564 16:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:36.564 16:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:36.564 16:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:36.564 16:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.564 16:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.564 [2024-12-12 16:07:02.874965] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:36.564 16:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.564 16:07:02 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:36.564 "name": "Existed_Raid", 00:10:36.564 "aliases": [ 00:10:36.564 "aae530f9-2b85-4a31-8f48-88a6cf4327bd" 00:10:36.564 ], 00:10:36.564 "product_name": "Raid Volume", 00:10:36.564 "block_size": 512, 00:10:36.564 "num_blocks": 262144, 00:10:36.564 "uuid": "aae530f9-2b85-4a31-8f48-88a6cf4327bd", 00:10:36.564 "assigned_rate_limits": { 00:10:36.564 "rw_ios_per_sec": 0, 00:10:36.564 "rw_mbytes_per_sec": 0, 00:10:36.564 "r_mbytes_per_sec": 0, 00:10:36.564 "w_mbytes_per_sec": 0 00:10:36.564 }, 00:10:36.564 "claimed": false, 00:10:36.565 "zoned": false, 00:10:36.565 "supported_io_types": { 00:10:36.565 "read": true, 00:10:36.565 "write": true, 00:10:36.565 "unmap": true, 00:10:36.565 "flush": true, 00:10:36.565 "reset": true, 00:10:36.565 "nvme_admin": false, 00:10:36.565 "nvme_io": false, 00:10:36.565 "nvme_io_md": false, 00:10:36.565 "write_zeroes": true, 00:10:36.565 "zcopy": false, 00:10:36.565 "get_zone_info": false, 00:10:36.565 "zone_management": false, 00:10:36.565 "zone_append": false, 00:10:36.565 "compare": false, 00:10:36.565 "compare_and_write": false, 00:10:36.565 "abort": false, 00:10:36.565 "seek_hole": false, 00:10:36.565 "seek_data": false, 00:10:36.565 "copy": false, 00:10:36.565 "nvme_iov_md": false 00:10:36.565 }, 00:10:36.565 "memory_domains": [ 00:10:36.565 { 00:10:36.565 "dma_device_id": "system", 00:10:36.565 "dma_device_type": 1 00:10:36.565 }, 00:10:36.565 { 00:10:36.565 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.565 "dma_device_type": 2 00:10:36.565 }, 00:10:36.565 { 00:10:36.565 "dma_device_id": "system", 00:10:36.565 "dma_device_type": 1 00:10:36.565 }, 00:10:36.565 { 00:10:36.565 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.565 "dma_device_type": 2 00:10:36.565 }, 00:10:36.565 { 00:10:36.565 "dma_device_id": "system", 00:10:36.565 "dma_device_type": 1 00:10:36.565 }, 00:10:36.565 { 00:10:36.565 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:10:36.565 "dma_device_type": 2 00:10:36.565 }, 00:10:36.565 { 00:10:36.565 "dma_device_id": "system", 00:10:36.565 "dma_device_type": 1 00:10:36.565 }, 00:10:36.565 { 00:10:36.565 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.565 "dma_device_type": 2 00:10:36.565 } 00:10:36.565 ], 00:10:36.565 "driver_specific": { 00:10:36.565 "raid": { 00:10:36.565 "uuid": "aae530f9-2b85-4a31-8f48-88a6cf4327bd", 00:10:36.565 "strip_size_kb": 64, 00:10:36.565 "state": "online", 00:10:36.565 "raid_level": "raid0", 00:10:36.565 "superblock": false, 00:10:36.565 "num_base_bdevs": 4, 00:10:36.565 "num_base_bdevs_discovered": 4, 00:10:36.565 "num_base_bdevs_operational": 4, 00:10:36.565 "base_bdevs_list": [ 00:10:36.565 { 00:10:36.565 "name": "BaseBdev1", 00:10:36.565 "uuid": "e39d5eb2-b7d0-439d-acf1-07706b91efab", 00:10:36.565 "is_configured": true, 00:10:36.565 "data_offset": 0, 00:10:36.565 "data_size": 65536 00:10:36.565 }, 00:10:36.565 { 00:10:36.565 "name": "BaseBdev2", 00:10:36.565 "uuid": "db63d6da-13bc-40c1-bd76-470ab6b5e483", 00:10:36.565 "is_configured": true, 00:10:36.565 "data_offset": 0, 00:10:36.565 "data_size": 65536 00:10:36.565 }, 00:10:36.565 { 00:10:36.565 "name": "BaseBdev3", 00:10:36.565 "uuid": "9c100a69-cdfa-4183-90b2-936c4555c73e", 00:10:36.565 "is_configured": true, 00:10:36.565 "data_offset": 0, 00:10:36.565 "data_size": 65536 00:10:36.565 }, 00:10:36.565 { 00:10:36.565 "name": "BaseBdev4", 00:10:36.565 "uuid": "40d1e5ee-e0c3-4d76-a61b-41c5fd22442f", 00:10:36.565 "is_configured": true, 00:10:36.565 "data_offset": 0, 00:10:36.565 "data_size": 65536 00:10:36.565 } 00:10:36.565 ] 00:10:36.565 } 00:10:36.565 } 00:10:36.565 }' 00:10:36.565 16:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:36.824 16:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:36.824 BaseBdev2 00:10:36.824 BaseBdev3 
00:10:36.824 BaseBdev4' 00:10:36.824 16:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:36.824 16:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:36.824 16:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:36.824 16:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:36.824 16:07:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.824 16:07:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.824 16:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:36.824 16:07:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.824 16:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:36.824 16:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:36.824 16:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:36.824 16:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:36.824 16:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:36.824 16:07:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.824 16:07:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.824 16:07:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.824 16:07:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:36.824 16:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:36.824 16:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:36.824 16:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:36.824 16:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:36.824 16:07:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.824 16:07:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.824 16:07:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.824 16:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:36.824 16:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:36.824 16:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:36.824 16:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:36.824 16:07:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.824 16:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:36.824 16:07:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.824 16:07:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.083 16:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:37.083 16:07:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:37.083 16:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:37.083 16:07:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.083 16:07:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.083 [2024-12-12 16:07:03.190097] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:37.083 [2024-12-12 16:07:03.190139] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:37.083 [2024-12-12 16:07:03.190195] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:37.083 16:07:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.083 16:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:37.083 16:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:37.083 16:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:37.083 16:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:37.083 16:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:37.083 16:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:10:37.083 16:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:37.083 16:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:37.083 16:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:37.083 16:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:10:37.083 16:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:37.083 16:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:37.083 16:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:37.083 16:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:37.083 16:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:37.083 16:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.084 16:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:37.084 16:07:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.084 16:07:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.084 16:07:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.084 16:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:37.084 "name": "Existed_Raid", 00:10:37.084 "uuid": "aae530f9-2b85-4a31-8f48-88a6cf4327bd", 00:10:37.084 "strip_size_kb": 64, 00:10:37.084 "state": "offline", 00:10:37.084 "raid_level": "raid0", 00:10:37.084 "superblock": false, 00:10:37.084 "num_base_bdevs": 4, 00:10:37.084 "num_base_bdevs_discovered": 3, 00:10:37.084 "num_base_bdevs_operational": 3, 00:10:37.084 "base_bdevs_list": [ 00:10:37.084 { 00:10:37.084 "name": null, 00:10:37.084 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:37.084 "is_configured": false, 00:10:37.084 "data_offset": 0, 00:10:37.084 "data_size": 65536 00:10:37.084 }, 00:10:37.084 { 00:10:37.084 "name": "BaseBdev2", 00:10:37.084 "uuid": "db63d6da-13bc-40c1-bd76-470ab6b5e483", 00:10:37.084 "is_configured": 
true, 00:10:37.084 "data_offset": 0, 00:10:37.084 "data_size": 65536 00:10:37.084 }, 00:10:37.084 { 00:10:37.084 "name": "BaseBdev3", 00:10:37.084 "uuid": "9c100a69-cdfa-4183-90b2-936c4555c73e", 00:10:37.084 "is_configured": true, 00:10:37.084 "data_offset": 0, 00:10:37.084 "data_size": 65536 00:10:37.084 }, 00:10:37.084 { 00:10:37.084 "name": "BaseBdev4", 00:10:37.084 "uuid": "40d1e5ee-e0c3-4d76-a61b-41c5fd22442f", 00:10:37.084 "is_configured": true, 00:10:37.084 "data_offset": 0, 00:10:37.084 "data_size": 65536 00:10:37.084 } 00:10:37.084 ] 00:10:37.084 }' 00:10:37.084 16:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:37.084 16:07:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.651 16:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:37.651 16:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:37.651 16:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.652 16:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:37.652 16:07:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.652 16:07:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.652 16:07:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.652 16:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:37.652 16:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:37.652 16:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:37.652 16:07:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:37.652 16:07:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.652 [2024-12-12 16:07:03.775012] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:37.652 16:07:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.652 16:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:37.652 16:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:37.652 16:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.652 16:07:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.652 16:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:37.652 16:07:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.652 16:07:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.652 16:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:37.652 16:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:37.652 16:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:37.652 16:07:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.652 16:07:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.652 [2024-12-12 16:07:03.938465] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:37.910 16:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.910 16:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:37.910 16:07:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:37.910 16:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.910 16:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.910 16:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:37.910 16:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.910 16:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.910 16:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:37.910 16:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:37.910 16:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:37.910 16:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.910 16:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.910 [2024-12-12 16:07:04.102390] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:37.910 [2024-12-12 16:07:04.102542] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:37.910 16:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.910 16:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:37.910 16:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:37.911 16:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.911 16:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:10:37.911 16:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.911 16:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.911 16:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.170 16:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:38.170 16:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:38.170 16:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:38.170 16:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:38.170 16:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:38.170 16:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:38.170 16:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.170 16:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.170 BaseBdev2 00:10:38.170 16:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.170 16:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:38.170 16:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:38.170 16:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:38.170 16:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:38.170 16:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:38.170 16:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:10:38.170 16:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:38.170 16:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.170 16:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.170 16:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.170 16:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:38.170 16:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.170 16:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.170 [ 00:10:38.170 { 00:10:38.170 "name": "BaseBdev2", 00:10:38.170 "aliases": [ 00:10:38.170 "71aeb815-933f-4d6a-bc26-cca07b0c3c0f" 00:10:38.170 ], 00:10:38.170 "product_name": "Malloc disk", 00:10:38.170 "block_size": 512, 00:10:38.170 "num_blocks": 65536, 00:10:38.170 "uuid": "71aeb815-933f-4d6a-bc26-cca07b0c3c0f", 00:10:38.170 "assigned_rate_limits": { 00:10:38.170 "rw_ios_per_sec": 0, 00:10:38.170 "rw_mbytes_per_sec": 0, 00:10:38.170 "r_mbytes_per_sec": 0, 00:10:38.170 "w_mbytes_per_sec": 0 00:10:38.170 }, 00:10:38.170 "claimed": false, 00:10:38.170 "zoned": false, 00:10:38.170 "supported_io_types": { 00:10:38.170 "read": true, 00:10:38.170 "write": true, 00:10:38.170 "unmap": true, 00:10:38.170 "flush": true, 00:10:38.170 "reset": true, 00:10:38.170 "nvme_admin": false, 00:10:38.170 "nvme_io": false, 00:10:38.170 "nvme_io_md": false, 00:10:38.170 "write_zeroes": true, 00:10:38.170 "zcopy": true, 00:10:38.170 "get_zone_info": false, 00:10:38.170 "zone_management": false, 00:10:38.170 "zone_append": false, 00:10:38.170 "compare": false, 00:10:38.170 "compare_and_write": false, 00:10:38.170 "abort": true, 00:10:38.170 "seek_hole": false, 00:10:38.170 
"seek_data": false, 00:10:38.170 "copy": true, 00:10:38.170 "nvme_iov_md": false 00:10:38.170 }, 00:10:38.170 "memory_domains": [ 00:10:38.170 { 00:10:38.170 "dma_device_id": "system", 00:10:38.170 "dma_device_type": 1 00:10:38.170 }, 00:10:38.170 { 00:10:38.170 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:38.170 "dma_device_type": 2 00:10:38.170 } 00:10:38.170 ], 00:10:38.170 "driver_specific": {} 00:10:38.170 } 00:10:38.170 ] 00:10:38.170 16:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.170 16:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:38.170 16:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:38.170 16:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:38.170 16:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:38.170 16:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.170 16:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.170 BaseBdev3 00:10:38.170 16:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.170 16:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:38.170 16:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:38.170 16:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:38.170 16:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:38.171 16:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:38.171 16:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:10:38.171 16:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:38.171 16:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.171 16:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.171 16:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.171 16:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:38.171 16:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.171 16:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.171 [ 00:10:38.171 { 00:10:38.171 "name": "BaseBdev3", 00:10:38.171 "aliases": [ 00:10:38.171 "5a71d785-ce71-4b5b-bd1d-1b4c4c12b37b" 00:10:38.171 ], 00:10:38.171 "product_name": "Malloc disk", 00:10:38.171 "block_size": 512, 00:10:38.171 "num_blocks": 65536, 00:10:38.171 "uuid": "5a71d785-ce71-4b5b-bd1d-1b4c4c12b37b", 00:10:38.171 "assigned_rate_limits": { 00:10:38.171 "rw_ios_per_sec": 0, 00:10:38.171 "rw_mbytes_per_sec": 0, 00:10:38.171 "r_mbytes_per_sec": 0, 00:10:38.171 "w_mbytes_per_sec": 0 00:10:38.171 }, 00:10:38.171 "claimed": false, 00:10:38.171 "zoned": false, 00:10:38.171 "supported_io_types": { 00:10:38.171 "read": true, 00:10:38.171 "write": true, 00:10:38.171 "unmap": true, 00:10:38.171 "flush": true, 00:10:38.171 "reset": true, 00:10:38.171 "nvme_admin": false, 00:10:38.171 "nvme_io": false, 00:10:38.171 "nvme_io_md": false, 00:10:38.171 "write_zeroes": true, 00:10:38.171 "zcopy": true, 00:10:38.171 "get_zone_info": false, 00:10:38.171 "zone_management": false, 00:10:38.171 "zone_append": false, 00:10:38.171 "compare": false, 00:10:38.171 "compare_and_write": false, 00:10:38.171 "abort": true, 00:10:38.171 "seek_hole": false, 00:10:38.171 "seek_data": false, 
00:10:38.171 "copy": true, 00:10:38.171 "nvme_iov_md": false 00:10:38.171 }, 00:10:38.171 "memory_domains": [ 00:10:38.171 { 00:10:38.171 "dma_device_id": "system", 00:10:38.171 "dma_device_type": 1 00:10:38.171 }, 00:10:38.171 { 00:10:38.171 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:38.171 "dma_device_type": 2 00:10:38.171 } 00:10:38.171 ], 00:10:38.171 "driver_specific": {} 00:10:38.171 } 00:10:38.171 ] 00:10:38.171 16:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.171 16:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:38.171 16:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:38.171 16:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:38.171 16:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:38.171 16:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.171 16:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.171 BaseBdev4 00:10:38.171 16:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.171 16:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:38.171 16:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:38.171 16:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:38.171 16:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:38.171 16:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:38.171 16:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:38.171 
16:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:38.171 16:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.171 16:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.171 16:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.171 16:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:38.171 16:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.171 16:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.431 [ 00:10:38.431 { 00:10:38.431 "name": "BaseBdev4", 00:10:38.431 "aliases": [ 00:10:38.431 "070ef7ba-f888-464d-8f6e-96c2fb57c1d4" 00:10:38.431 ], 00:10:38.431 "product_name": "Malloc disk", 00:10:38.431 "block_size": 512, 00:10:38.431 "num_blocks": 65536, 00:10:38.431 "uuid": "070ef7ba-f888-464d-8f6e-96c2fb57c1d4", 00:10:38.431 "assigned_rate_limits": { 00:10:38.431 "rw_ios_per_sec": 0, 00:10:38.431 "rw_mbytes_per_sec": 0, 00:10:38.431 "r_mbytes_per_sec": 0, 00:10:38.431 "w_mbytes_per_sec": 0 00:10:38.431 }, 00:10:38.431 "claimed": false, 00:10:38.431 "zoned": false, 00:10:38.431 "supported_io_types": { 00:10:38.431 "read": true, 00:10:38.431 "write": true, 00:10:38.431 "unmap": true, 00:10:38.431 "flush": true, 00:10:38.431 "reset": true, 00:10:38.431 "nvme_admin": false, 00:10:38.431 "nvme_io": false, 00:10:38.431 "nvme_io_md": false, 00:10:38.431 "write_zeroes": true, 00:10:38.431 "zcopy": true, 00:10:38.431 "get_zone_info": false, 00:10:38.431 "zone_management": false, 00:10:38.431 "zone_append": false, 00:10:38.431 "compare": false, 00:10:38.431 "compare_and_write": false, 00:10:38.431 "abort": true, 00:10:38.431 "seek_hole": false, 00:10:38.431 "seek_data": false, 00:10:38.431 
"copy": true, 00:10:38.431 "nvme_iov_md": false 00:10:38.431 }, 00:10:38.431 "memory_domains": [ 00:10:38.431 { 00:10:38.431 "dma_device_id": "system", 00:10:38.431 "dma_device_type": 1 00:10:38.431 }, 00:10:38.431 { 00:10:38.431 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:38.431 "dma_device_type": 2 00:10:38.431 } 00:10:38.431 ], 00:10:38.431 "driver_specific": {} 00:10:38.431 } 00:10:38.431 ] 00:10:38.431 16:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.431 16:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:38.431 16:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:38.431 16:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:38.431 16:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:38.431 16:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.431 16:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.431 [2024-12-12 16:07:04.545723] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:38.431 [2024-12-12 16:07:04.545854] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:38.431 [2024-12-12 16:07:04.545907] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:38.431 [2024-12-12 16:07:04.547985] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:38.431 [2024-12-12 16:07:04.548082] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:38.431 16:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.431 16:07:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:38.431 16:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:38.431 16:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:38.431 16:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:38.431 16:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:38.431 16:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:38.431 16:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:38.431 16:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:38.431 16:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:38.431 16:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:38.431 16:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.431 16:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:38.431 16:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.431 16:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.431 16:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.431 16:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:38.431 "name": "Existed_Raid", 00:10:38.431 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.431 "strip_size_kb": 64, 00:10:38.431 "state": "configuring", 00:10:38.431 
"raid_level": "raid0", 00:10:38.431 "superblock": false, 00:10:38.431 "num_base_bdevs": 4, 00:10:38.431 "num_base_bdevs_discovered": 3, 00:10:38.431 "num_base_bdevs_operational": 4, 00:10:38.431 "base_bdevs_list": [ 00:10:38.431 { 00:10:38.431 "name": "BaseBdev1", 00:10:38.431 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.431 "is_configured": false, 00:10:38.431 "data_offset": 0, 00:10:38.431 "data_size": 0 00:10:38.431 }, 00:10:38.431 { 00:10:38.431 "name": "BaseBdev2", 00:10:38.431 "uuid": "71aeb815-933f-4d6a-bc26-cca07b0c3c0f", 00:10:38.431 "is_configured": true, 00:10:38.431 "data_offset": 0, 00:10:38.431 "data_size": 65536 00:10:38.431 }, 00:10:38.431 { 00:10:38.431 "name": "BaseBdev3", 00:10:38.431 "uuid": "5a71d785-ce71-4b5b-bd1d-1b4c4c12b37b", 00:10:38.431 "is_configured": true, 00:10:38.431 "data_offset": 0, 00:10:38.431 "data_size": 65536 00:10:38.431 }, 00:10:38.431 { 00:10:38.431 "name": "BaseBdev4", 00:10:38.431 "uuid": "070ef7ba-f888-464d-8f6e-96c2fb57c1d4", 00:10:38.431 "is_configured": true, 00:10:38.431 "data_offset": 0, 00:10:38.431 "data_size": 65536 00:10:38.431 } 00:10:38.431 ] 00:10:38.431 }' 00:10:38.431 16:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:38.431 16:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.691 16:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:38.691 16:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.691 16:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.691 [2024-12-12 16:07:05.020962] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:38.691 16:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.691 16:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:38.691 16:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:38.691 16:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:38.691 16:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:38.691 16:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:38.691 16:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:38.691 16:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:38.691 16:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:38.691 16:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:38.691 16:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:38.691 16:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.691 16:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:38.691 16:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.691 16:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.950 16:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.950 16:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:38.950 "name": "Existed_Raid", 00:10:38.950 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.950 "strip_size_kb": 64, 00:10:38.950 "state": "configuring", 00:10:38.950 "raid_level": "raid0", 00:10:38.950 "superblock": false, 00:10:38.950 
"num_base_bdevs": 4, 00:10:38.950 "num_base_bdevs_discovered": 2, 00:10:38.950 "num_base_bdevs_operational": 4, 00:10:38.950 "base_bdevs_list": [ 00:10:38.950 { 00:10:38.950 "name": "BaseBdev1", 00:10:38.950 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.950 "is_configured": false, 00:10:38.950 "data_offset": 0, 00:10:38.950 "data_size": 0 00:10:38.950 }, 00:10:38.950 { 00:10:38.950 "name": null, 00:10:38.950 "uuid": "71aeb815-933f-4d6a-bc26-cca07b0c3c0f", 00:10:38.950 "is_configured": false, 00:10:38.950 "data_offset": 0, 00:10:38.950 "data_size": 65536 00:10:38.950 }, 00:10:38.950 { 00:10:38.950 "name": "BaseBdev3", 00:10:38.950 "uuid": "5a71d785-ce71-4b5b-bd1d-1b4c4c12b37b", 00:10:38.950 "is_configured": true, 00:10:38.950 "data_offset": 0, 00:10:38.950 "data_size": 65536 00:10:38.950 }, 00:10:38.950 { 00:10:38.950 "name": "BaseBdev4", 00:10:38.950 "uuid": "070ef7ba-f888-464d-8f6e-96c2fb57c1d4", 00:10:38.950 "is_configured": true, 00:10:38.950 "data_offset": 0, 00:10:38.950 "data_size": 65536 00:10:38.950 } 00:10:38.950 ] 00:10:38.950 }' 00:10:38.950 16:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:38.950 16:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.210 16:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.210 16:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:39.210 16:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.210 16:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.210 16:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.210 16:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:39.210 16:07:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:39.210 16:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.210 16:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.210 [2024-12-12 16:07:05.507424] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:39.210 BaseBdev1 00:10:39.210 16:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.210 16:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:39.210 16:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:39.210 16:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:39.210 16:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:39.210 16:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:39.210 16:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:39.210 16:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:39.210 16:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.210 16:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.210 16:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.210 16:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:39.210 16:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.210 16:07:05 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:39.210 [ 00:10:39.210 { 00:10:39.210 "name": "BaseBdev1", 00:10:39.210 "aliases": [ 00:10:39.210 "935f60c5-7644-4c2f-a12c-231460e449e2" 00:10:39.210 ], 00:10:39.210 "product_name": "Malloc disk", 00:10:39.210 "block_size": 512, 00:10:39.210 "num_blocks": 65536, 00:10:39.210 "uuid": "935f60c5-7644-4c2f-a12c-231460e449e2", 00:10:39.210 "assigned_rate_limits": { 00:10:39.210 "rw_ios_per_sec": 0, 00:10:39.210 "rw_mbytes_per_sec": 0, 00:10:39.210 "r_mbytes_per_sec": 0, 00:10:39.210 "w_mbytes_per_sec": 0 00:10:39.210 }, 00:10:39.210 "claimed": true, 00:10:39.210 "claim_type": "exclusive_write", 00:10:39.210 "zoned": false, 00:10:39.210 "supported_io_types": { 00:10:39.210 "read": true, 00:10:39.210 "write": true, 00:10:39.210 "unmap": true, 00:10:39.210 "flush": true, 00:10:39.210 "reset": true, 00:10:39.210 "nvme_admin": false, 00:10:39.210 "nvme_io": false, 00:10:39.210 "nvme_io_md": false, 00:10:39.210 "write_zeroes": true, 00:10:39.210 "zcopy": true, 00:10:39.210 "get_zone_info": false, 00:10:39.210 "zone_management": false, 00:10:39.210 "zone_append": false, 00:10:39.210 "compare": false, 00:10:39.210 "compare_and_write": false, 00:10:39.210 "abort": true, 00:10:39.210 "seek_hole": false, 00:10:39.210 "seek_data": false, 00:10:39.210 "copy": true, 00:10:39.210 "nvme_iov_md": false 00:10:39.210 }, 00:10:39.210 "memory_domains": [ 00:10:39.210 { 00:10:39.210 "dma_device_id": "system", 00:10:39.210 "dma_device_type": 1 00:10:39.210 }, 00:10:39.210 { 00:10:39.210 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.210 "dma_device_type": 2 00:10:39.210 } 00:10:39.210 ], 00:10:39.210 "driver_specific": {} 00:10:39.210 } 00:10:39.210 ] 00:10:39.210 16:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.210 16:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:39.210 16:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:39.210 16:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:39.210 16:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:39.210 16:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:39.210 16:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:39.210 16:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:39.210 16:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:39.210 16:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:39.210 16:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:39.210 16:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:39.210 16:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.210 16:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:39.210 16:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.210 16:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.470 16:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.470 16:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:39.470 "name": "Existed_Raid", 00:10:39.470 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.470 "strip_size_kb": 64, 00:10:39.470 "state": "configuring", 00:10:39.470 "raid_level": "raid0", 00:10:39.470 "superblock": false, 
00:10:39.470 "num_base_bdevs": 4, 00:10:39.470 "num_base_bdevs_discovered": 3, 00:10:39.470 "num_base_bdevs_operational": 4, 00:10:39.470 "base_bdevs_list": [ 00:10:39.470 { 00:10:39.470 "name": "BaseBdev1", 00:10:39.470 "uuid": "935f60c5-7644-4c2f-a12c-231460e449e2", 00:10:39.470 "is_configured": true, 00:10:39.470 "data_offset": 0, 00:10:39.470 "data_size": 65536 00:10:39.470 }, 00:10:39.470 { 00:10:39.470 "name": null, 00:10:39.470 "uuid": "71aeb815-933f-4d6a-bc26-cca07b0c3c0f", 00:10:39.470 "is_configured": false, 00:10:39.470 "data_offset": 0, 00:10:39.470 "data_size": 65536 00:10:39.470 }, 00:10:39.470 { 00:10:39.470 "name": "BaseBdev3", 00:10:39.470 "uuid": "5a71d785-ce71-4b5b-bd1d-1b4c4c12b37b", 00:10:39.470 "is_configured": true, 00:10:39.470 "data_offset": 0, 00:10:39.470 "data_size": 65536 00:10:39.470 }, 00:10:39.470 { 00:10:39.470 "name": "BaseBdev4", 00:10:39.470 "uuid": "070ef7ba-f888-464d-8f6e-96c2fb57c1d4", 00:10:39.470 "is_configured": true, 00:10:39.470 "data_offset": 0, 00:10:39.470 "data_size": 65536 00:10:39.470 } 00:10:39.470 ] 00:10:39.470 }' 00:10:39.470 16:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:39.470 16:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.729 16:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.729 16:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:39.729 16:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.729 16:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.729 16:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.729 16:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:39.729 16:07:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:39.729 16:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.729 16:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.989 [2024-12-12 16:07:06.082532] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:39.989 16:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.989 16:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:39.989 16:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:39.989 16:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:39.989 16:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:39.989 16:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:39.989 16:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:39.989 16:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:39.989 16:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:39.989 16:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:39.989 16:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:39.989 16:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.989 16:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:39.989 16:07:06 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.989 16:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.989 16:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.989 16:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:39.989 "name": "Existed_Raid", 00:10:39.989 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.989 "strip_size_kb": 64, 00:10:39.989 "state": "configuring", 00:10:39.989 "raid_level": "raid0", 00:10:39.989 "superblock": false, 00:10:39.989 "num_base_bdevs": 4, 00:10:39.989 "num_base_bdevs_discovered": 2, 00:10:39.989 "num_base_bdevs_operational": 4, 00:10:39.989 "base_bdevs_list": [ 00:10:39.989 { 00:10:39.989 "name": "BaseBdev1", 00:10:39.989 "uuid": "935f60c5-7644-4c2f-a12c-231460e449e2", 00:10:39.989 "is_configured": true, 00:10:39.989 "data_offset": 0, 00:10:39.989 "data_size": 65536 00:10:39.989 }, 00:10:39.989 { 00:10:39.989 "name": null, 00:10:39.989 "uuid": "71aeb815-933f-4d6a-bc26-cca07b0c3c0f", 00:10:39.989 "is_configured": false, 00:10:39.989 "data_offset": 0, 00:10:39.989 "data_size": 65536 00:10:39.989 }, 00:10:39.989 { 00:10:39.989 "name": null, 00:10:39.989 "uuid": "5a71d785-ce71-4b5b-bd1d-1b4c4c12b37b", 00:10:39.989 "is_configured": false, 00:10:39.989 "data_offset": 0, 00:10:39.989 "data_size": 65536 00:10:39.989 }, 00:10:39.989 { 00:10:39.989 "name": "BaseBdev4", 00:10:39.989 "uuid": "070ef7ba-f888-464d-8f6e-96c2fb57c1d4", 00:10:39.989 "is_configured": true, 00:10:39.989 "data_offset": 0, 00:10:39.989 "data_size": 65536 00:10:39.989 } 00:10:39.989 ] 00:10:39.989 }' 00:10:39.989 16:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:39.989 16:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.248 16:07:06 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.248 16:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.248 16:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.248 16:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:40.248 16:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.248 16:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:40.248 16:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:40.248 16:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.248 16:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.248 [2024-12-12 16:07:06.593625] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:40.508 16:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.508 16:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:40.508 16:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:40.508 16:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:40.508 16:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:40.508 16:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:40.508 16:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:40.508 16:07:06 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:40.508 16:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:40.508 16:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:40.508 16:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:40.508 16:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.508 16:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.508 16:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:40.508 16:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.508 16:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.508 16:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:40.508 "name": "Existed_Raid", 00:10:40.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:40.508 "strip_size_kb": 64, 00:10:40.508 "state": "configuring", 00:10:40.508 "raid_level": "raid0", 00:10:40.508 "superblock": false, 00:10:40.508 "num_base_bdevs": 4, 00:10:40.508 "num_base_bdevs_discovered": 3, 00:10:40.508 "num_base_bdevs_operational": 4, 00:10:40.508 "base_bdevs_list": [ 00:10:40.508 { 00:10:40.508 "name": "BaseBdev1", 00:10:40.508 "uuid": "935f60c5-7644-4c2f-a12c-231460e449e2", 00:10:40.508 "is_configured": true, 00:10:40.508 "data_offset": 0, 00:10:40.508 "data_size": 65536 00:10:40.508 }, 00:10:40.508 { 00:10:40.508 "name": null, 00:10:40.508 "uuid": "71aeb815-933f-4d6a-bc26-cca07b0c3c0f", 00:10:40.508 "is_configured": false, 00:10:40.508 "data_offset": 0, 00:10:40.508 "data_size": 65536 00:10:40.508 }, 00:10:40.508 { 00:10:40.508 "name": "BaseBdev3", 00:10:40.508 "uuid": "5a71d785-ce71-4b5b-bd1d-1b4c4c12b37b", 
00:10:40.508 "is_configured": true, 00:10:40.508 "data_offset": 0, 00:10:40.508 "data_size": 65536 00:10:40.508 }, 00:10:40.508 { 00:10:40.508 "name": "BaseBdev4", 00:10:40.508 "uuid": "070ef7ba-f888-464d-8f6e-96c2fb57c1d4", 00:10:40.508 "is_configured": true, 00:10:40.508 "data_offset": 0, 00:10:40.508 "data_size": 65536 00:10:40.508 } 00:10:40.508 ] 00:10:40.508 }' 00:10:40.508 16:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:40.508 16:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.767 16:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.767 16:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.767 16:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.767 16:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:40.767 16:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.767 16:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:40.767 16:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:40.767 16:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.767 16:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.767 [2024-12-12 16:07:07.100881] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:41.026 16:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.026 16:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:41.026 16:07:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:41.026 16:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:41.026 16:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:41.026 16:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:41.026 16:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:41.026 16:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:41.026 16:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:41.026 16:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:41.026 16:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:41.026 16:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.026 16:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.026 16:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.026 16:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:41.026 16:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.026 16:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:41.026 "name": "Existed_Raid", 00:10:41.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.026 "strip_size_kb": 64, 00:10:41.026 "state": "configuring", 00:10:41.026 "raid_level": "raid0", 00:10:41.026 "superblock": false, 00:10:41.026 "num_base_bdevs": 4, 00:10:41.026 "num_base_bdevs_discovered": 2, 00:10:41.026 
"num_base_bdevs_operational": 4, 00:10:41.026 "base_bdevs_list": [ 00:10:41.026 { 00:10:41.026 "name": null, 00:10:41.026 "uuid": "935f60c5-7644-4c2f-a12c-231460e449e2", 00:10:41.026 "is_configured": false, 00:10:41.026 "data_offset": 0, 00:10:41.026 "data_size": 65536 00:10:41.026 }, 00:10:41.026 { 00:10:41.026 "name": null, 00:10:41.026 "uuid": "71aeb815-933f-4d6a-bc26-cca07b0c3c0f", 00:10:41.026 "is_configured": false, 00:10:41.026 "data_offset": 0, 00:10:41.026 "data_size": 65536 00:10:41.026 }, 00:10:41.026 { 00:10:41.026 "name": "BaseBdev3", 00:10:41.026 "uuid": "5a71d785-ce71-4b5b-bd1d-1b4c4c12b37b", 00:10:41.026 "is_configured": true, 00:10:41.026 "data_offset": 0, 00:10:41.026 "data_size": 65536 00:10:41.026 }, 00:10:41.026 { 00:10:41.026 "name": "BaseBdev4", 00:10:41.026 "uuid": "070ef7ba-f888-464d-8f6e-96c2fb57c1d4", 00:10:41.026 "is_configured": true, 00:10:41.026 "data_offset": 0, 00:10:41.026 "data_size": 65536 00:10:41.026 } 00:10:41.026 ] 00:10:41.026 }' 00:10:41.026 16:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:41.026 16:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.594 16:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.594 16:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:41.594 16:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.594 16:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.594 16:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.594 16:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:41.594 16:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev 
Existed_Raid BaseBdev2 00:10:41.594 16:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.594 16:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.594 [2024-12-12 16:07:07.678224] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:41.594 16:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.594 16:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:41.594 16:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:41.594 16:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:41.594 16:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:41.594 16:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:41.594 16:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:41.594 16:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:41.594 16:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:41.594 16:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:41.594 16:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:41.594 16:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.594 16:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:41.594 16:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.594 
16:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.594 16:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.594 16:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:41.594 "name": "Existed_Raid", 00:10:41.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.594 "strip_size_kb": 64, 00:10:41.594 "state": "configuring", 00:10:41.594 "raid_level": "raid0", 00:10:41.594 "superblock": false, 00:10:41.594 "num_base_bdevs": 4, 00:10:41.594 "num_base_bdevs_discovered": 3, 00:10:41.594 "num_base_bdevs_operational": 4, 00:10:41.594 "base_bdevs_list": [ 00:10:41.594 { 00:10:41.594 "name": null, 00:10:41.594 "uuid": "935f60c5-7644-4c2f-a12c-231460e449e2", 00:10:41.594 "is_configured": false, 00:10:41.594 "data_offset": 0, 00:10:41.594 "data_size": 65536 00:10:41.594 }, 00:10:41.594 { 00:10:41.594 "name": "BaseBdev2", 00:10:41.594 "uuid": "71aeb815-933f-4d6a-bc26-cca07b0c3c0f", 00:10:41.594 "is_configured": true, 00:10:41.594 "data_offset": 0, 00:10:41.594 "data_size": 65536 00:10:41.594 }, 00:10:41.594 { 00:10:41.594 "name": "BaseBdev3", 00:10:41.594 "uuid": "5a71d785-ce71-4b5b-bd1d-1b4c4c12b37b", 00:10:41.594 "is_configured": true, 00:10:41.594 "data_offset": 0, 00:10:41.594 "data_size": 65536 00:10:41.594 }, 00:10:41.594 { 00:10:41.594 "name": "BaseBdev4", 00:10:41.594 "uuid": "070ef7ba-f888-464d-8f6e-96c2fb57c1d4", 00:10:41.594 "is_configured": true, 00:10:41.594 "data_offset": 0, 00:10:41.594 "data_size": 65536 00:10:41.594 } 00:10:41.594 ] 00:10:41.594 }' 00:10:41.594 16:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:41.594 16:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.853 16:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.853 16:07:08 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.853 16:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.853 16:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:41.853 16:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.853 16:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:41.853 16:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.853 16:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:41.853 16:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.853 16:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.112 16:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.112 16:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 935f60c5-7644-4c2f-a12c-231460e449e2 00:10:42.112 16:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.112 16:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.112 [2024-12-12 16:07:08.284440] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:42.112 [2024-12-12 16:07:08.284593] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:42.112 [2024-12-12 16:07:08.284619] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:42.112 [2024-12-12 16:07:08.284980] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 
00:10:42.112 [2024-12-12 16:07:08.285189] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:42.112 [2024-12-12 16:07:08.285231] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:42.112 [2024-12-12 16:07:08.285540] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:42.112 NewBaseBdev 00:10:42.112 16:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.112 16:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:42.112 16:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:42.112 16:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:42.112 16:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:42.112 16:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:42.112 16:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:42.113 16:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:42.113 16:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.113 16:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.113 16:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.113 16:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:42.113 16:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.113 16:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:10:42.113 [ 00:10:42.113 { 00:10:42.113 "name": "NewBaseBdev", 00:10:42.113 "aliases": [ 00:10:42.113 "935f60c5-7644-4c2f-a12c-231460e449e2" 00:10:42.113 ], 00:10:42.113 "product_name": "Malloc disk", 00:10:42.113 "block_size": 512, 00:10:42.113 "num_blocks": 65536, 00:10:42.113 "uuid": "935f60c5-7644-4c2f-a12c-231460e449e2", 00:10:42.113 "assigned_rate_limits": { 00:10:42.113 "rw_ios_per_sec": 0, 00:10:42.113 "rw_mbytes_per_sec": 0, 00:10:42.113 "r_mbytes_per_sec": 0, 00:10:42.113 "w_mbytes_per_sec": 0 00:10:42.113 }, 00:10:42.113 "claimed": true, 00:10:42.113 "claim_type": "exclusive_write", 00:10:42.113 "zoned": false, 00:10:42.113 "supported_io_types": { 00:10:42.113 "read": true, 00:10:42.113 "write": true, 00:10:42.113 "unmap": true, 00:10:42.113 "flush": true, 00:10:42.113 "reset": true, 00:10:42.113 "nvme_admin": false, 00:10:42.113 "nvme_io": false, 00:10:42.113 "nvme_io_md": false, 00:10:42.113 "write_zeroes": true, 00:10:42.113 "zcopy": true, 00:10:42.113 "get_zone_info": false, 00:10:42.113 "zone_management": false, 00:10:42.113 "zone_append": false, 00:10:42.113 "compare": false, 00:10:42.113 "compare_and_write": false, 00:10:42.113 "abort": true, 00:10:42.113 "seek_hole": false, 00:10:42.113 "seek_data": false, 00:10:42.113 "copy": true, 00:10:42.113 "nvme_iov_md": false 00:10:42.113 }, 00:10:42.113 "memory_domains": [ 00:10:42.113 { 00:10:42.113 "dma_device_id": "system", 00:10:42.113 "dma_device_type": 1 00:10:42.113 }, 00:10:42.113 { 00:10:42.113 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:42.113 "dma_device_type": 2 00:10:42.113 } 00:10:42.113 ], 00:10:42.113 "driver_specific": {} 00:10:42.113 } 00:10:42.113 ] 00:10:42.113 16:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.113 16:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:42.113 16:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 4 00:10:42.113 16:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:42.113 16:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:42.113 16:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:42.113 16:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:42.113 16:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:42.113 16:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:42.113 16:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:42.113 16:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:42.113 16:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:42.113 16:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.113 16:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:42.113 16:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.113 16:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.113 16:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.113 16:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:42.113 "name": "Existed_Raid", 00:10:42.113 "uuid": "9f582b13-d7af-40fb-b5f7-9409ffa040f4", 00:10:42.113 "strip_size_kb": 64, 00:10:42.113 "state": "online", 00:10:42.113 "raid_level": "raid0", 00:10:42.113 "superblock": false, 00:10:42.113 "num_base_bdevs": 4, 00:10:42.113 
"num_base_bdevs_discovered": 4, 00:10:42.113 "num_base_bdevs_operational": 4, 00:10:42.113 "base_bdevs_list": [ 00:10:42.113 { 00:10:42.113 "name": "NewBaseBdev", 00:10:42.113 "uuid": "935f60c5-7644-4c2f-a12c-231460e449e2", 00:10:42.113 "is_configured": true, 00:10:42.113 "data_offset": 0, 00:10:42.113 "data_size": 65536 00:10:42.113 }, 00:10:42.113 { 00:10:42.113 "name": "BaseBdev2", 00:10:42.113 "uuid": "71aeb815-933f-4d6a-bc26-cca07b0c3c0f", 00:10:42.113 "is_configured": true, 00:10:42.113 "data_offset": 0, 00:10:42.113 "data_size": 65536 00:10:42.113 }, 00:10:42.113 { 00:10:42.113 "name": "BaseBdev3", 00:10:42.113 "uuid": "5a71d785-ce71-4b5b-bd1d-1b4c4c12b37b", 00:10:42.113 "is_configured": true, 00:10:42.113 "data_offset": 0, 00:10:42.113 "data_size": 65536 00:10:42.113 }, 00:10:42.113 { 00:10:42.113 "name": "BaseBdev4", 00:10:42.113 "uuid": "070ef7ba-f888-464d-8f6e-96c2fb57c1d4", 00:10:42.113 "is_configured": true, 00:10:42.113 "data_offset": 0, 00:10:42.113 "data_size": 65536 00:10:42.113 } 00:10:42.113 ] 00:10:42.113 }' 00:10:42.113 16:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:42.113 16:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.681 16:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:42.681 16:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:42.681 16:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:42.681 16:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:42.681 16:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:42.681 16:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:42.681 16:07:08 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:42.681 16:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.681 16:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.681 16:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:42.681 [2024-12-12 16:07:08.784025] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:42.681 16:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.681 16:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:42.681 "name": "Existed_Raid", 00:10:42.681 "aliases": [ 00:10:42.681 "9f582b13-d7af-40fb-b5f7-9409ffa040f4" 00:10:42.681 ], 00:10:42.681 "product_name": "Raid Volume", 00:10:42.681 "block_size": 512, 00:10:42.681 "num_blocks": 262144, 00:10:42.681 "uuid": "9f582b13-d7af-40fb-b5f7-9409ffa040f4", 00:10:42.681 "assigned_rate_limits": { 00:10:42.681 "rw_ios_per_sec": 0, 00:10:42.681 "rw_mbytes_per_sec": 0, 00:10:42.681 "r_mbytes_per_sec": 0, 00:10:42.681 "w_mbytes_per_sec": 0 00:10:42.681 }, 00:10:42.681 "claimed": false, 00:10:42.681 "zoned": false, 00:10:42.681 "supported_io_types": { 00:10:42.681 "read": true, 00:10:42.681 "write": true, 00:10:42.681 "unmap": true, 00:10:42.681 "flush": true, 00:10:42.681 "reset": true, 00:10:42.681 "nvme_admin": false, 00:10:42.681 "nvme_io": false, 00:10:42.681 "nvme_io_md": false, 00:10:42.681 "write_zeroes": true, 00:10:42.681 "zcopy": false, 00:10:42.681 "get_zone_info": false, 00:10:42.681 "zone_management": false, 00:10:42.681 "zone_append": false, 00:10:42.681 "compare": false, 00:10:42.681 "compare_and_write": false, 00:10:42.681 "abort": false, 00:10:42.681 "seek_hole": false, 00:10:42.681 "seek_data": false, 00:10:42.681 "copy": false, 00:10:42.681 "nvme_iov_md": false 00:10:42.681 }, 00:10:42.681 "memory_domains": [ 
00:10:42.681 { 00:10:42.681 "dma_device_id": "system", 00:10:42.681 "dma_device_type": 1 00:10:42.681 }, 00:10:42.681 { 00:10:42.681 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:42.681 "dma_device_type": 2 00:10:42.681 }, 00:10:42.681 { 00:10:42.681 "dma_device_id": "system", 00:10:42.681 "dma_device_type": 1 00:10:42.681 }, 00:10:42.681 { 00:10:42.681 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:42.681 "dma_device_type": 2 00:10:42.681 }, 00:10:42.681 { 00:10:42.681 "dma_device_id": "system", 00:10:42.681 "dma_device_type": 1 00:10:42.681 }, 00:10:42.681 { 00:10:42.681 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:42.681 "dma_device_type": 2 00:10:42.681 }, 00:10:42.681 { 00:10:42.681 "dma_device_id": "system", 00:10:42.681 "dma_device_type": 1 00:10:42.681 }, 00:10:42.681 { 00:10:42.681 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:42.681 "dma_device_type": 2 00:10:42.681 } 00:10:42.681 ], 00:10:42.681 "driver_specific": { 00:10:42.681 "raid": { 00:10:42.681 "uuid": "9f582b13-d7af-40fb-b5f7-9409ffa040f4", 00:10:42.681 "strip_size_kb": 64, 00:10:42.681 "state": "online", 00:10:42.681 "raid_level": "raid0", 00:10:42.681 "superblock": false, 00:10:42.681 "num_base_bdevs": 4, 00:10:42.681 "num_base_bdevs_discovered": 4, 00:10:42.681 "num_base_bdevs_operational": 4, 00:10:42.681 "base_bdevs_list": [ 00:10:42.681 { 00:10:42.681 "name": "NewBaseBdev", 00:10:42.681 "uuid": "935f60c5-7644-4c2f-a12c-231460e449e2", 00:10:42.681 "is_configured": true, 00:10:42.681 "data_offset": 0, 00:10:42.681 "data_size": 65536 00:10:42.681 }, 00:10:42.681 { 00:10:42.681 "name": "BaseBdev2", 00:10:42.681 "uuid": "71aeb815-933f-4d6a-bc26-cca07b0c3c0f", 00:10:42.681 "is_configured": true, 00:10:42.681 "data_offset": 0, 00:10:42.681 "data_size": 65536 00:10:42.681 }, 00:10:42.681 { 00:10:42.681 "name": "BaseBdev3", 00:10:42.681 "uuid": "5a71d785-ce71-4b5b-bd1d-1b4c4c12b37b", 00:10:42.681 "is_configured": true, 00:10:42.681 "data_offset": 0, 00:10:42.681 "data_size": 65536 
00:10:42.681 }, 00:10:42.681 { 00:10:42.681 "name": "BaseBdev4", 00:10:42.681 "uuid": "070ef7ba-f888-464d-8f6e-96c2fb57c1d4", 00:10:42.681 "is_configured": true, 00:10:42.681 "data_offset": 0, 00:10:42.681 "data_size": 65536 00:10:42.681 } 00:10:42.681 ] 00:10:42.681 } 00:10:42.681 } 00:10:42.681 }' 00:10:42.681 16:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:42.681 16:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:42.681 BaseBdev2 00:10:42.681 BaseBdev3 00:10:42.681 BaseBdev4' 00:10:42.681 16:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:42.681 16:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:42.681 16:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:42.681 16:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:42.681 16:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:42.681 16:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.681 16:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.681 16:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.682 16:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:42.682 16:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:42.682 16:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:42.682 
16:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:42.682 16:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:42.682 16:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.682 16:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.682 16:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.682 16:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:42.682 16:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:42.682 16:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:42.682 16:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:42.682 16:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:42.682 16:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.682 16:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.682 16:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.944 16:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:42.944 16:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:42.944 16:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:42.944 16:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:10:42.944 16:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:42.944 16:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.944 16:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.944 16:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.944 16:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:42.944 16:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:42.944 16:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:42.944 16:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.944 16:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.944 [2024-12-12 16:07:09.103054] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:42.944 [2024-12-12 16:07:09.103094] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:42.944 [2024-12-12 16:07:09.103176] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:42.944 [2024-12-12 16:07:09.103251] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:42.944 [2024-12-12 16:07:09.103261] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:42.944 16:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.944 16:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 71400 00:10:42.944 16:07:09 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@954 -- # '[' -z 71400 ']' 00:10:42.944 16:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 71400 00:10:42.944 16:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:10:42.944 16:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:42.944 16:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71400 00:10:42.944 16:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:42.944 16:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:42.944 16:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71400' 00:10:42.944 killing process with pid 71400 00:10:42.944 16:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 71400 00:10:42.944 [2024-12-12 16:07:09.154333] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:42.944 16:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 71400 00:10:43.515 [2024-12-12 16:07:09.593176] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:44.898 ************************************ 00:10:44.898 END TEST raid_state_function_test 00:10:44.898 ************************************ 00:10:44.898 16:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:44.898 00:10:44.898 real 0m11.894s 00:10:44.898 user 0m18.581s 00:10:44.898 sys 0m2.211s 00:10:44.898 16:07:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:44.898 16:07:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.898 16:07:10 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb 
raid_state_function_test raid0 4 true 00:10:44.898 16:07:10 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:44.898 16:07:10 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:44.898 16:07:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:44.898 ************************************ 00:10:44.898 START TEST raid_state_function_test_sb 00:10:44.898 ************************************ 00:10:44.898 16:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 true 00:10:44.898 16:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:44.898 16:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:44.898 16:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:44.898 16:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:44.898 16:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:44.898 16:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:44.898 16:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:44.898 16:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:44.898 16:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:44.898 16:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:44.898 16:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:44.898 16:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:44.898 16:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:44.898 
16:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:44.898 16:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:44.898 16:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:44.898 16:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:44.898 16:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:44.898 16:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:44.898 16:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:44.898 16:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:44.898 16:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:44.898 16:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:44.898 16:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:44.898 16:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:44.898 16:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:44.898 16:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:44.899 16:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:44.899 16:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:44.899 16:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=72077 00:10:44.899 16:07:10 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:44.899 Process raid pid: 72077 00:10:44.899 16:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72077' 00:10:44.899 16:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 72077 00:10:44.899 16:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 72077 ']' 00:10:44.899 16:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:44.899 16:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:44.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:44.899 16:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:44.899 16:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:44.899 16:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.899 [2024-12-12 16:07:11.041309] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:10:44.899 [2024-12-12 16:07:11.041457] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:44.899 [2024-12-12 16:07:11.214849] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:45.158 [2024-12-12 16:07:11.367336] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:45.417 [2024-12-12 16:07:11.626911] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:45.417 [2024-12-12 16:07:11.626974] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:45.677 16:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:45.677 16:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:10:45.677 16:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:45.677 16:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.677 16:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.677 [2024-12-12 16:07:11.871417] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:45.677 [2024-12-12 16:07:11.871506] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:45.677 [2024-12-12 16:07:11.871517] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:45.677 [2024-12-12 16:07:11.871528] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:45.677 [2024-12-12 16:07:11.871535] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:10:45.677 [2024-12-12 16:07:11.871544] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:45.677 [2024-12-12 16:07:11.871550] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:45.677 [2024-12-12 16:07:11.871560] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:45.677 16:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.677 16:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:45.677 16:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:45.677 16:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:45.677 16:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:45.677 16:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:45.677 16:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:45.677 16:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:45.677 16:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:45.677 16:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:45.677 16:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:45.677 16:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.678 16:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.678 16:07:11 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.678 16:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:45.678 16:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.678 16:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:45.678 "name": "Existed_Raid", 00:10:45.678 "uuid": "f790a79e-89fe-49b0-a15d-18aae625cf7f", 00:10:45.678 "strip_size_kb": 64, 00:10:45.678 "state": "configuring", 00:10:45.678 "raid_level": "raid0", 00:10:45.678 "superblock": true, 00:10:45.678 "num_base_bdevs": 4, 00:10:45.678 "num_base_bdevs_discovered": 0, 00:10:45.678 "num_base_bdevs_operational": 4, 00:10:45.678 "base_bdevs_list": [ 00:10:45.678 { 00:10:45.678 "name": "BaseBdev1", 00:10:45.678 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.678 "is_configured": false, 00:10:45.678 "data_offset": 0, 00:10:45.678 "data_size": 0 00:10:45.678 }, 00:10:45.678 { 00:10:45.678 "name": "BaseBdev2", 00:10:45.678 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.678 "is_configured": false, 00:10:45.678 "data_offset": 0, 00:10:45.678 "data_size": 0 00:10:45.678 }, 00:10:45.678 { 00:10:45.678 "name": "BaseBdev3", 00:10:45.678 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.678 "is_configured": false, 00:10:45.678 "data_offset": 0, 00:10:45.678 "data_size": 0 00:10:45.678 }, 00:10:45.678 { 00:10:45.678 "name": "BaseBdev4", 00:10:45.678 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.678 "is_configured": false, 00:10:45.678 "data_offset": 0, 00:10:45.678 "data_size": 0 00:10:45.678 } 00:10:45.678 ] 00:10:45.678 }' 00:10:45.678 16:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.678 16:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.247 16:07:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:46.247 16:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.247 16:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.247 [2024-12-12 16:07:12.298623] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:46.247 [2024-12-12 16:07:12.298776] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:46.247 16:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.247 16:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:46.247 16:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.247 16:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.247 [2024-12-12 16:07:12.310571] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:46.247 [2024-12-12 16:07:12.310669] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:46.247 [2024-12-12 16:07:12.310696] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:46.247 [2024-12-12 16:07:12.310719] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:46.247 [2024-12-12 16:07:12.310737] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:46.247 [2024-12-12 16:07:12.310758] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:46.247 [2024-12-12 16:07:12.310776] bdev.c:8697:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:10:46.247 [2024-12-12 16:07:12.310797] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:46.247 16:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.247 16:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:46.247 16:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.247 16:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.247 [2024-12-12 16:07:12.366655] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:46.247 BaseBdev1 00:10:46.247 16:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.247 16:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:46.247 16:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:46.247 16:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:46.247 16:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:46.247 16:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:46.247 16:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:46.247 16:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:46.247 16:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.247 16:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.247 16:07:12 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.247 16:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:46.247 16:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.247 16:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.247 [ 00:10:46.247 { 00:10:46.247 "name": "BaseBdev1", 00:10:46.247 "aliases": [ 00:10:46.247 "103c8851-30c3-429b-ba66-c4e13832fd41" 00:10:46.247 ], 00:10:46.247 "product_name": "Malloc disk", 00:10:46.247 "block_size": 512, 00:10:46.247 "num_blocks": 65536, 00:10:46.247 "uuid": "103c8851-30c3-429b-ba66-c4e13832fd41", 00:10:46.247 "assigned_rate_limits": { 00:10:46.247 "rw_ios_per_sec": 0, 00:10:46.247 "rw_mbytes_per_sec": 0, 00:10:46.247 "r_mbytes_per_sec": 0, 00:10:46.247 "w_mbytes_per_sec": 0 00:10:46.247 }, 00:10:46.247 "claimed": true, 00:10:46.247 "claim_type": "exclusive_write", 00:10:46.247 "zoned": false, 00:10:46.247 "supported_io_types": { 00:10:46.247 "read": true, 00:10:46.247 "write": true, 00:10:46.247 "unmap": true, 00:10:46.247 "flush": true, 00:10:46.247 "reset": true, 00:10:46.247 "nvme_admin": false, 00:10:46.247 "nvme_io": false, 00:10:46.247 "nvme_io_md": false, 00:10:46.247 "write_zeroes": true, 00:10:46.247 "zcopy": true, 00:10:46.247 "get_zone_info": false, 00:10:46.247 "zone_management": false, 00:10:46.247 "zone_append": false, 00:10:46.247 "compare": false, 00:10:46.247 "compare_and_write": false, 00:10:46.247 "abort": true, 00:10:46.247 "seek_hole": false, 00:10:46.247 "seek_data": false, 00:10:46.247 "copy": true, 00:10:46.247 "nvme_iov_md": false 00:10:46.247 }, 00:10:46.247 "memory_domains": [ 00:10:46.247 { 00:10:46.247 "dma_device_id": "system", 00:10:46.247 "dma_device_type": 1 00:10:46.247 }, 00:10:46.247 { 00:10:46.247 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.247 "dma_device_type": 2 00:10:46.247 } 
00:10:46.247 ], 00:10:46.247 "driver_specific": {} 00:10:46.247 } 00:10:46.247 ] 00:10:46.247 16:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.247 16:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:46.247 16:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:46.247 16:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:46.247 16:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:46.247 16:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:46.247 16:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:46.247 16:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:46.247 16:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:46.247 16:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:46.247 16:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:46.247 16:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:46.247 16:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.247 16:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:46.247 16:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.247 16:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.247 16:07:12 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.247 16:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:46.247 "name": "Existed_Raid", 00:10:46.247 "uuid": "c75a82df-8bcd-4484-b4a7-6af879e7660d", 00:10:46.247 "strip_size_kb": 64, 00:10:46.247 "state": "configuring", 00:10:46.247 "raid_level": "raid0", 00:10:46.247 "superblock": true, 00:10:46.247 "num_base_bdevs": 4, 00:10:46.247 "num_base_bdevs_discovered": 1, 00:10:46.247 "num_base_bdevs_operational": 4, 00:10:46.247 "base_bdevs_list": [ 00:10:46.247 { 00:10:46.247 "name": "BaseBdev1", 00:10:46.247 "uuid": "103c8851-30c3-429b-ba66-c4e13832fd41", 00:10:46.247 "is_configured": true, 00:10:46.247 "data_offset": 2048, 00:10:46.247 "data_size": 63488 00:10:46.247 }, 00:10:46.247 { 00:10:46.247 "name": "BaseBdev2", 00:10:46.247 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:46.247 "is_configured": false, 00:10:46.247 "data_offset": 0, 00:10:46.247 "data_size": 0 00:10:46.247 }, 00:10:46.247 { 00:10:46.247 "name": "BaseBdev3", 00:10:46.247 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:46.247 "is_configured": false, 00:10:46.247 "data_offset": 0, 00:10:46.247 "data_size": 0 00:10:46.247 }, 00:10:46.247 { 00:10:46.247 "name": "BaseBdev4", 00:10:46.247 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:46.247 "is_configured": false, 00:10:46.248 "data_offset": 0, 00:10:46.248 "data_size": 0 00:10:46.248 } 00:10:46.248 ] 00:10:46.248 }' 00:10:46.248 16:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:46.248 16:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.816 16:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:46.816 16:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.816 16:07:12 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.816 [2024-12-12 16:07:12.873903] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:46.816 [2024-12-12 16:07:12.873991] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:46.816 16:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.816 16:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:46.816 16:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.816 16:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.816 [2024-12-12 16:07:12.885917] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:46.816 [2024-12-12 16:07:12.888119] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:46.816 [2024-12-12 16:07:12.888198] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:46.816 [2024-12-12 16:07:12.888227] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:46.816 [2024-12-12 16:07:12.888251] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:46.816 [2024-12-12 16:07:12.888270] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:46.816 [2024-12-12 16:07:12.888289] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:46.816 16:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.816 16:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:10:46.816 16:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:46.816 16:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:46.816 16:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:46.816 16:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:46.816 16:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:46.816 16:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:46.816 16:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:46.816 16:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:46.816 16:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:46.816 16:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:46.816 16:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:46.816 16:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:46.816 16:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.816 16:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.816 16:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.816 16:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.816 16:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:10:46.816 "name": "Existed_Raid", 00:10:46.816 "uuid": "bd4bb7b6-b8a2-4254-999c-015ef9f469cd", 00:10:46.816 "strip_size_kb": 64, 00:10:46.816 "state": "configuring", 00:10:46.816 "raid_level": "raid0", 00:10:46.816 "superblock": true, 00:10:46.816 "num_base_bdevs": 4, 00:10:46.816 "num_base_bdevs_discovered": 1, 00:10:46.816 "num_base_bdevs_operational": 4, 00:10:46.816 "base_bdevs_list": [ 00:10:46.816 { 00:10:46.816 "name": "BaseBdev1", 00:10:46.816 "uuid": "103c8851-30c3-429b-ba66-c4e13832fd41", 00:10:46.816 "is_configured": true, 00:10:46.816 "data_offset": 2048, 00:10:46.816 "data_size": 63488 00:10:46.816 }, 00:10:46.816 { 00:10:46.816 "name": "BaseBdev2", 00:10:46.816 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:46.816 "is_configured": false, 00:10:46.816 "data_offset": 0, 00:10:46.816 "data_size": 0 00:10:46.816 }, 00:10:46.816 { 00:10:46.816 "name": "BaseBdev3", 00:10:46.816 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:46.816 "is_configured": false, 00:10:46.816 "data_offset": 0, 00:10:46.816 "data_size": 0 00:10:46.816 }, 00:10:46.816 { 00:10:46.816 "name": "BaseBdev4", 00:10:46.816 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:46.816 "is_configured": false, 00:10:46.816 "data_offset": 0, 00:10:46.816 "data_size": 0 00:10:46.816 } 00:10:46.816 ] 00:10:46.816 }' 00:10:46.816 16:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:46.816 16:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.076 16:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:47.076 16:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.076 16:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.076 [2024-12-12 16:07:13.398724] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:10:47.076 BaseBdev2 00:10:47.076 16:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.076 16:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:47.076 16:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:47.076 16:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:47.076 16:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:47.076 16:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:47.076 16:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:47.076 16:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:47.076 16:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.076 16:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.076 16:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.076 16:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:47.076 16:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.076 16:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.076 [ 00:10:47.076 { 00:10:47.076 "name": "BaseBdev2", 00:10:47.076 "aliases": [ 00:10:47.076 "2f38e6f6-333a-4410-9931-890e3b50a139" 00:10:47.076 ], 00:10:47.076 "product_name": "Malloc disk", 00:10:47.076 "block_size": 512, 00:10:47.076 "num_blocks": 65536, 00:10:47.076 "uuid": "2f38e6f6-333a-4410-9931-890e3b50a139", 
00:10:47.076 "assigned_rate_limits": { 00:10:47.337 "rw_ios_per_sec": 0, 00:10:47.337 "rw_mbytes_per_sec": 0, 00:10:47.337 "r_mbytes_per_sec": 0, 00:10:47.337 "w_mbytes_per_sec": 0 00:10:47.337 }, 00:10:47.337 "claimed": true, 00:10:47.337 "claim_type": "exclusive_write", 00:10:47.337 "zoned": false, 00:10:47.337 "supported_io_types": { 00:10:47.337 "read": true, 00:10:47.337 "write": true, 00:10:47.337 "unmap": true, 00:10:47.337 "flush": true, 00:10:47.337 "reset": true, 00:10:47.337 "nvme_admin": false, 00:10:47.337 "nvme_io": false, 00:10:47.337 "nvme_io_md": false, 00:10:47.337 "write_zeroes": true, 00:10:47.337 "zcopy": true, 00:10:47.337 "get_zone_info": false, 00:10:47.337 "zone_management": false, 00:10:47.337 "zone_append": false, 00:10:47.337 "compare": false, 00:10:47.337 "compare_and_write": false, 00:10:47.337 "abort": true, 00:10:47.337 "seek_hole": false, 00:10:47.337 "seek_data": false, 00:10:47.337 "copy": true, 00:10:47.337 "nvme_iov_md": false 00:10:47.337 }, 00:10:47.337 "memory_domains": [ 00:10:47.337 { 00:10:47.337 "dma_device_id": "system", 00:10:47.337 "dma_device_type": 1 00:10:47.337 }, 00:10:47.337 { 00:10:47.337 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:47.337 "dma_device_type": 2 00:10:47.337 } 00:10:47.337 ], 00:10:47.337 "driver_specific": {} 00:10:47.337 } 00:10:47.337 ] 00:10:47.337 16:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.337 16:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:47.337 16:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:47.337 16:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:47.337 16:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:47.337 16:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:10:47.337 16:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:47.337 16:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:47.337 16:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:47.337 16:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:47.337 16:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:47.337 16:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:47.337 16:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:47.337 16:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:47.337 16:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:47.337 16:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.337 16:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.337 16:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.337 16:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.337 16:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:47.337 "name": "Existed_Raid", 00:10:47.337 "uuid": "bd4bb7b6-b8a2-4254-999c-015ef9f469cd", 00:10:47.337 "strip_size_kb": 64, 00:10:47.337 "state": "configuring", 00:10:47.337 "raid_level": "raid0", 00:10:47.337 "superblock": true, 00:10:47.337 "num_base_bdevs": 4, 00:10:47.337 "num_base_bdevs_discovered": 2, 00:10:47.337 
"num_base_bdevs_operational": 4, 00:10:47.337 "base_bdevs_list": [ 00:10:47.337 { 00:10:47.337 "name": "BaseBdev1", 00:10:47.337 "uuid": "103c8851-30c3-429b-ba66-c4e13832fd41", 00:10:47.337 "is_configured": true, 00:10:47.337 "data_offset": 2048, 00:10:47.337 "data_size": 63488 00:10:47.337 }, 00:10:47.337 { 00:10:47.337 "name": "BaseBdev2", 00:10:47.337 "uuid": "2f38e6f6-333a-4410-9931-890e3b50a139", 00:10:47.337 "is_configured": true, 00:10:47.337 "data_offset": 2048, 00:10:47.338 "data_size": 63488 00:10:47.338 }, 00:10:47.338 { 00:10:47.338 "name": "BaseBdev3", 00:10:47.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:47.338 "is_configured": false, 00:10:47.338 "data_offset": 0, 00:10:47.338 "data_size": 0 00:10:47.338 }, 00:10:47.338 { 00:10:47.338 "name": "BaseBdev4", 00:10:47.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:47.338 "is_configured": false, 00:10:47.338 "data_offset": 0, 00:10:47.338 "data_size": 0 00:10:47.338 } 00:10:47.338 ] 00:10:47.338 }' 00:10:47.338 16:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:47.338 16:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.599 16:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:47.599 16:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.599 16:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.599 [2024-12-12 16:07:13.922547] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:47.599 BaseBdev3 00:10:47.599 16:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.599 16:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:47.599 16:07:13 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:47.599 16:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:47.599 16:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:47.599 16:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:47.599 16:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:47.599 16:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:47.599 16:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.599 16:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.599 16:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.599 16:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:47.599 16:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.599 16:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.599 [ 00:10:47.599 { 00:10:47.599 "name": "BaseBdev3", 00:10:47.599 "aliases": [ 00:10:47.599 "0c8a6e8a-8ebd-44bb-8a88-7de01aa04e39" 00:10:47.860 ], 00:10:47.860 "product_name": "Malloc disk", 00:10:47.860 "block_size": 512, 00:10:47.860 "num_blocks": 65536, 00:10:47.860 "uuid": "0c8a6e8a-8ebd-44bb-8a88-7de01aa04e39", 00:10:47.860 "assigned_rate_limits": { 00:10:47.860 "rw_ios_per_sec": 0, 00:10:47.860 "rw_mbytes_per_sec": 0, 00:10:47.860 "r_mbytes_per_sec": 0, 00:10:47.860 "w_mbytes_per_sec": 0 00:10:47.860 }, 00:10:47.860 "claimed": true, 00:10:47.860 "claim_type": "exclusive_write", 00:10:47.860 "zoned": false, 00:10:47.860 "supported_io_types": { 
00:10:47.860 "read": true, 00:10:47.860 "write": true, 00:10:47.860 "unmap": true, 00:10:47.860 "flush": true, 00:10:47.860 "reset": true, 00:10:47.860 "nvme_admin": false, 00:10:47.860 "nvme_io": false, 00:10:47.860 "nvme_io_md": false, 00:10:47.860 "write_zeroes": true, 00:10:47.860 "zcopy": true, 00:10:47.860 "get_zone_info": false, 00:10:47.860 "zone_management": false, 00:10:47.860 "zone_append": false, 00:10:47.860 "compare": false, 00:10:47.860 "compare_and_write": false, 00:10:47.860 "abort": true, 00:10:47.860 "seek_hole": false, 00:10:47.860 "seek_data": false, 00:10:47.860 "copy": true, 00:10:47.860 "nvme_iov_md": false 00:10:47.860 }, 00:10:47.860 "memory_domains": [ 00:10:47.860 { 00:10:47.860 "dma_device_id": "system", 00:10:47.860 "dma_device_type": 1 00:10:47.860 }, 00:10:47.860 { 00:10:47.860 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:47.860 "dma_device_type": 2 00:10:47.860 } 00:10:47.860 ], 00:10:47.860 "driver_specific": {} 00:10:47.860 } 00:10:47.860 ] 00:10:47.860 16:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.860 16:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:47.860 16:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:47.860 16:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:47.860 16:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:47.860 16:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:47.860 16:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:47.860 16:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:47.860 16:07:13 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:47.860 16:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:47.860 16:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:47.860 16:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:47.860 16:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:47.860 16:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:47.860 16:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.860 16:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:47.860 16:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.860 16:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.860 16:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.860 16:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:47.860 "name": "Existed_Raid", 00:10:47.860 "uuid": "bd4bb7b6-b8a2-4254-999c-015ef9f469cd", 00:10:47.861 "strip_size_kb": 64, 00:10:47.861 "state": "configuring", 00:10:47.861 "raid_level": "raid0", 00:10:47.861 "superblock": true, 00:10:47.861 "num_base_bdevs": 4, 00:10:47.861 "num_base_bdevs_discovered": 3, 00:10:47.861 "num_base_bdevs_operational": 4, 00:10:47.861 "base_bdevs_list": [ 00:10:47.861 { 00:10:47.861 "name": "BaseBdev1", 00:10:47.861 "uuid": "103c8851-30c3-429b-ba66-c4e13832fd41", 00:10:47.861 "is_configured": true, 00:10:47.861 "data_offset": 2048, 00:10:47.861 "data_size": 63488 00:10:47.861 }, 00:10:47.861 { 00:10:47.861 "name": "BaseBdev2", 00:10:47.861 
"uuid": "2f38e6f6-333a-4410-9931-890e3b50a139", 00:10:47.861 "is_configured": true, 00:10:47.861 "data_offset": 2048, 00:10:47.861 "data_size": 63488 00:10:47.861 }, 00:10:47.861 { 00:10:47.861 "name": "BaseBdev3", 00:10:47.861 "uuid": "0c8a6e8a-8ebd-44bb-8a88-7de01aa04e39", 00:10:47.861 "is_configured": true, 00:10:47.861 "data_offset": 2048, 00:10:47.861 "data_size": 63488 00:10:47.861 }, 00:10:47.861 { 00:10:47.861 "name": "BaseBdev4", 00:10:47.861 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:47.861 "is_configured": false, 00:10:47.861 "data_offset": 0, 00:10:47.861 "data_size": 0 00:10:47.861 } 00:10:47.861 ] 00:10:47.861 }' 00:10:47.861 16:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:47.861 16:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.121 16:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:48.121 16:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.121 16:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.382 [2024-12-12 16:07:14.493771] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:48.382 [2024-12-12 16:07:14.494104] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:48.382 [2024-12-12 16:07:14.494123] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:48.382 BaseBdev4 00:10:48.382 [2024-12-12 16:07:14.494449] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:48.382 [2024-12-12 16:07:14.494627] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:48.382 [2024-12-12 16:07:14.494640] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:10:48.382 [2024-12-12 16:07:14.494794] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:48.382 16:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.382 16:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:48.382 16:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:48.382 16:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:48.382 16:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:48.382 16:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:48.382 16:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:48.382 16:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:48.382 16:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.382 16:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.382 16:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.382 16:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:48.382 16:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.382 16:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.382 [ 00:10:48.382 { 00:10:48.382 "name": "BaseBdev4", 00:10:48.382 "aliases": [ 00:10:48.382 "9d8fc75c-fe15-4bd7-a0f5-62b234ac098e" 00:10:48.382 ], 00:10:48.382 "product_name": "Malloc disk", 00:10:48.382 "block_size": 512, 00:10:48.382 
"num_blocks": 65536, 00:10:48.382 "uuid": "9d8fc75c-fe15-4bd7-a0f5-62b234ac098e", 00:10:48.382 "assigned_rate_limits": { 00:10:48.382 "rw_ios_per_sec": 0, 00:10:48.382 "rw_mbytes_per_sec": 0, 00:10:48.382 "r_mbytes_per_sec": 0, 00:10:48.382 "w_mbytes_per_sec": 0 00:10:48.382 }, 00:10:48.382 "claimed": true, 00:10:48.382 "claim_type": "exclusive_write", 00:10:48.382 "zoned": false, 00:10:48.382 "supported_io_types": { 00:10:48.382 "read": true, 00:10:48.382 "write": true, 00:10:48.382 "unmap": true, 00:10:48.382 "flush": true, 00:10:48.382 "reset": true, 00:10:48.382 "nvme_admin": false, 00:10:48.382 "nvme_io": false, 00:10:48.382 "nvme_io_md": false, 00:10:48.382 "write_zeroes": true, 00:10:48.382 "zcopy": true, 00:10:48.382 "get_zone_info": false, 00:10:48.382 "zone_management": false, 00:10:48.382 "zone_append": false, 00:10:48.382 "compare": false, 00:10:48.382 "compare_and_write": false, 00:10:48.382 "abort": true, 00:10:48.382 "seek_hole": false, 00:10:48.382 "seek_data": false, 00:10:48.382 "copy": true, 00:10:48.382 "nvme_iov_md": false 00:10:48.382 }, 00:10:48.382 "memory_domains": [ 00:10:48.382 { 00:10:48.382 "dma_device_id": "system", 00:10:48.382 "dma_device_type": 1 00:10:48.382 }, 00:10:48.382 { 00:10:48.382 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:48.382 "dma_device_type": 2 00:10:48.382 } 00:10:48.382 ], 00:10:48.382 "driver_specific": {} 00:10:48.382 } 00:10:48.382 ] 00:10:48.382 16:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.382 16:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:48.382 16:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:48.382 16:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:48.382 16:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:10:48.382 16:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:48.382 16:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:48.382 16:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:48.382 16:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:48.382 16:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:48.382 16:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:48.382 16:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:48.382 16:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:48.382 16:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:48.382 16:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.382 16:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.382 16:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.382 16:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:48.382 16:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.382 16:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:48.382 "name": "Existed_Raid", 00:10:48.382 "uuid": "bd4bb7b6-b8a2-4254-999c-015ef9f469cd", 00:10:48.382 "strip_size_kb": 64, 00:10:48.382 "state": "online", 00:10:48.382 "raid_level": "raid0", 00:10:48.382 "superblock": true, 00:10:48.382 "num_base_bdevs": 4, 
00:10:48.382 "num_base_bdevs_discovered": 4, 00:10:48.382 "num_base_bdevs_operational": 4, 00:10:48.382 "base_bdevs_list": [ 00:10:48.382 { 00:10:48.382 "name": "BaseBdev1", 00:10:48.382 "uuid": "103c8851-30c3-429b-ba66-c4e13832fd41", 00:10:48.382 "is_configured": true, 00:10:48.382 "data_offset": 2048, 00:10:48.382 "data_size": 63488 00:10:48.382 }, 00:10:48.382 { 00:10:48.382 "name": "BaseBdev2", 00:10:48.382 "uuid": "2f38e6f6-333a-4410-9931-890e3b50a139", 00:10:48.382 "is_configured": true, 00:10:48.382 "data_offset": 2048, 00:10:48.382 "data_size": 63488 00:10:48.382 }, 00:10:48.382 { 00:10:48.382 "name": "BaseBdev3", 00:10:48.382 "uuid": "0c8a6e8a-8ebd-44bb-8a88-7de01aa04e39", 00:10:48.382 "is_configured": true, 00:10:48.382 "data_offset": 2048, 00:10:48.382 "data_size": 63488 00:10:48.382 }, 00:10:48.382 { 00:10:48.382 "name": "BaseBdev4", 00:10:48.382 "uuid": "9d8fc75c-fe15-4bd7-a0f5-62b234ac098e", 00:10:48.382 "is_configured": true, 00:10:48.382 "data_offset": 2048, 00:10:48.382 "data_size": 63488 00:10:48.382 } 00:10:48.382 ] 00:10:48.382 }' 00:10:48.382 16:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:48.382 16:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.642 16:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:48.642 16:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:48.642 16:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:48.642 16:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:48.642 16:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:48.642 16:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:48.642 
16:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:48.642 16:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:48.642 16:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.642 16:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.642 [2024-12-12 16:07:14.969354] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:48.642 16:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.902 16:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:48.902 "name": "Existed_Raid", 00:10:48.902 "aliases": [ 00:10:48.902 "bd4bb7b6-b8a2-4254-999c-015ef9f469cd" 00:10:48.902 ], 00:10:48.902 "product_name": "Raid Volume", 00:10:48.902 "block_size": 512, 00:10:48.902 "num_blocks": 253952, 00:10:48.902 "uuid": "bd4bb7b6-b8a2-4254-999c-015ef9f469cd", 00:10:48.902 "assigned_rate_limits": { 00:10:48.902 "rw_ios_per_sec": 0, 00:10:48.902 "rw_mbytes_per_sec": 0, 00:10:48.902 "r_mbytes_per_sec": 0, 00:10:48.903 "w_mbytes_per_sec": 0 00:10:48.903 }, 00:10:48.903 "claimed": false, 00:10:48.903 "zoned": false, 00:10:48.903 "supported_io_types": { 00:10:48.903 "read": true, 00:10:48.903 "write": true, 00:10:48.903 "unmap": true, 00:10:48.903 "flush": true, 00:10:48.903 "reset": true, 00:10:48.903 "nvme_admin": false, 00:10:48.903 "nvme_io": false, 00:10:48.903 "nvme_io_md": false, 00:10:48.903 "write_zeroes": true, 00:10:48.903 "zcopy": false, 00:10:48.903 "get_zone_info": false, 00:10:48.903 "zone_management": false, 00:10:48.903 "zone_append": false, 00:10:48.903 "compare": false, 00:10:48.903 "compare_and_write": false, 00:10:48.903 "abort": false, 00:10:48.903 "seek_hole": false, 00:10:48.903 "seek_data": false, 00:10:48.903 "copy": false, 00:10:48.903 
"nvme_iov_md": false 00:10:48.903 }, 00:10:48.903 "memory_domains": [ 00:10:48.903 { 00:10:48.903 "dma_device_id": "system", 00:10:48.903 "dma_device_type": 1 00:10:48.903 }, 00:10:48.903 { 00:10:48.903 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:48.903 "dma_device_type": 2 00:10:48.903 }, 00:10:48.903 { 00:10:48.903 "dma_device_id": "system", 00:10:48.903 "dma_device_type": 1 00:10:48.903 }, 00:10:48.903 { 00:10:48.903 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:48.903 "dma_device_type": 2 00:10:48.903 }, 00:10:48.903 { 00:10:48.903 "dma_device_id": "system", 00:10:48.903 "dma_device_type": 1 00:10:48.903 }, 00:10:48.903 { 00:10:48.903 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:48.903 "dma_device_type": 2 00:10:48.903 }, 00:10:48.903 { 00:10:48.903 "dma_device_id": "system", 00:10:48.903 "dma_device_type": 1 00:10:48.903 }, 00:10:48.903 { 00:10:48.903 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:48.903 "dma_device_type": 2 00:10:48.903 } 00:10:48.903 ], 00:10:48.903 "driver_specific": { 00:10:48.903 "raid": { 00:10:48.903 "uuid": "bd4bb7b6-b8a2-4254-999c-015ef9f469cd", 00:10:48.903 "strip_size_kb": 64, 00:10:48.903 "state": "online", 00:10:48.903 "raid_level": "raid0", 00:10:48.903 "superblock": true, 00:10:48.903 "num_base_bdevs": 4, 00:10:48.903 "num_base_bdevs_discovered": 4, 00:10:48.903 "num_base_bdevs_operational": 4, 00:10:48.903 "base_bdevs_list": [ 00:10:48.903 { 00:10:48.903 "name": "BaseBdev1", 00:10:48.903 "uuid": "103c8851-30c3-429b-ba66-c4e13832fd41", 00:10:48.903 "is_configured": true, 00:10:48.903 "data_offset": 2048, 00:10:48.903 "data_size": 63488 00:10:48.903 }, 00:10:48.903 { 00:10:48.903 "name": "BaseBdev2", 00:10:48.903 "uuid": "2f38e6f6-333a-4410-9931-890e3b50a139", 00:10:48.903 "is_configured": true, 00:10:48.903 "data_offset": 2048, 00:10:48.903 "data_size": 63488 00:10:48.903 }, 00:10:48.903 { 00:10:48.903 "name": "BaseBdev3", 00:10:48.903 "uuid": "0c8a6e8a-8ebd-44bb-8a88-7de01aa04e39", 00:10:48.903 "is_configured": true, 
00:10:48.903 "data_offset": 2048, 00:10:48.903 "data_size": 63488 00:10:48.903 }, 00:10:48.903 { 00:10:48.903 "name": "BaseBdev4", 00:10:48.903 "uuid": "9d8fc75c-fe15-4bd7-a0f5-62b234ac098e", 00:10:48.903 "is_configured": true, 00:10:48.903 "data_offset": 2048, 00:10:48.903 "data_size": 63488 00:10:48.903 } 00:10:48.903 ] 00:10:48.903 } 00:10:48.903 } 00:10:48.903 }' 00:10:48.903 16:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:48.903 16:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:48.903 BaseBdev2 00:10:48.903 BaseBdev3 00:10:48.903 BaseBdev4' 00:10:48.903 16:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:48.903 16:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:48.903 16:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:48.903 16:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:48.903 16:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.903 16:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.903 16:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:48.903 16:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.903 16:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:48.903 16:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:48.903 16:07:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:48.903 16:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:48.903 16:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.903 16:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.903 16:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:48.903 16:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.903 16:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:48.903 16:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:48.903 16:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:48.903 16:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:48.903 16:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:48.903 16:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.903 16:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.903 16:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.903 16:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:48.903 16:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:48.903 16:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:10:49.163 16:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:49.163 16:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:49.163 16:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.163 16:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.163 16:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.163 16:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:49.163 16:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:49.163 16:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:49.163 16:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.163 16:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.163 [2024-12-12 16:07:15.308495] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:49.163 [2024-12-12 16:07:15.308619] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:49.163 [2024-12-12 16:07:15.308684] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:49.163 16:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.163 16:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:49.163 16:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:49.163 16:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:10:49.163 16:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:49.163 16:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:49.163 16:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:10:49.163 16:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:49.163 16:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:49.163 16:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:49.163 16:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:49.163 16:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:49.163 16:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:49.163 16:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:49.163 16:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:49.163 16:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:49.163 16:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.163 16:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:49.163 16:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.163 16:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.163 16:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:49.163 16:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:49.163 "name": "Existed_Raid", 00:10:49.163 "uuid": "bd4bb7b6-b8a2-4254-999c-015ef9f469cd", 00:10:49.163 "strip_size_kb": 64, 00:10:49.163 "state": "offline", 00:10:49.163 "raid_level": "raid0", 00:10:49.163 "superblock": true, 00:10:49.163 "num_base_bdevs": 4, 00:10:49.163 "num_base_bdevs_discovered": 3, 00:10:49.163 "num_base_bdevs_operational": 3, 00:10:49.163 "base_bdevs_list": [ 00:10:49.163 { 00:10:49.163 "name": null, 00:10:49.163 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.163 "is_configured": false, 00:10:49.163 "data_offset": 0, 00:10:49.163 "data_size": 63488 00:10:49.163 }, 00:10:49.163 { 00:10:49.163 "name": "BaseBdev2", 00:10:49.163 "uuid": "2f38e6f6-333a-4410-9931-890e3b50a139", 00:10:49.163 "is_configured": true, 00:10:49.163 "data_offset": 2048, 00:10:49.163 "data_size": 63488 00:10:49.163 }, 00:10:49.163 { 00:10:49.163 "name": "BaseBdev3", 00:10:49.163 "uuid": "0c8a6e8a-8ebd-44bb-8a88-7de01aa04e39", 00:10:49.163 "is_configured": true, 00:10:49.163 "data_offset": 2048, 00:10:49.163 "data_size": 63488 00:10:49.163 }, 00:10:49.163 { 00:10:49.163 "name": "BaseBdev4", 00:10:49.163 "uuid": "9d8fc75c-fe15-4bd7-a0f5-62b234ac098e", 00:10:49.163 "is_configured": true, 00:10:49.163 "data_offset": 2048, 00:10:49.163 "data_size": 63488 00:10:49.163 } 00:10:49.163 ] 00:10:49.163 }' 00:10:49.163 16:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:49.163 16:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.731 16:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:49.731 16:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:49.731 16:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:49.731 16:07:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.731 16:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.731 16:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.731 16:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.731 16:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:49.731 16:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:49.731 16:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:49.731 16:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.731 16:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.731 [2024-12-12 16:07:15.925019] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:49.731 16:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.731 16:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:49.731 16:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:49.731 16:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.731 16:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:49.731 16:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.731 16:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.731 16:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:49.731 16:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:49.991 16:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:49.991 16:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:49.991 16:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.991 16:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.991 [2024-12-12 16:07:16.087467] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:49.991 16:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.991 16:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:49.991 16:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:49.991 16:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.991 16:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:49.991 16:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.991 16:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.991 16:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.991 16:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:49.991 16:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:49.991 16:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:49.991 16:07:16 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.991 16:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.991 [2024-12-12 16:07:16.261483] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:49.991 [2024-12-12 16:07:16.261566] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:50.253 16:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.253 16:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:50.253 16:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:50.253 16:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.253 16:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:50.253 16:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.253 16:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.253 16:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.253 16:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:50.253 16:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:50.253 16:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:50.253 16:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:50.253 16:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:50.253 16:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:10:50.253 16:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.253 16:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.253 BaseBdev2 00:10:50.253 16:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.253 16:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:50.253 16:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:50.253 16:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:50.253 16:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:50.253 16:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:50.253 16:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:50.253 16:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:50.253 16:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.253 16:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.253 16:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.253 16:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:50.253 16:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.253 16:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.253 [ 00:10:50.253 { 00:10:50.253 "name": "BaseBdev2", 00:10:50.253 "aliases": [ 00:10:50.253 
"2c4299ba-65b8-4dc3-800c-1be43ff29d5e" 00:10:50.253 ], 00:10:50.253 "product_name": "Malloc disk", 00:10:50.253 "block_size": 512, 00:10:50.253 "num_blocks": 65536, 00:10:50.253 "uuid": "2c4299ba-65b8-4dc3-800c-1be43ff29d5e", 00:10:50.253 "assigned_rate_limits": { 00:10:50.253 "rw_ios_per_sec": 0, 00:10:50.253 "rw_mbytes_per_sec": 0, 00:10:50.253 "r_mbytes_per_sec": 0, 00:10:50.253 "w_mbytes_per_sec": 0 00:10:50.253 }, 00:10:50.253 "claimed": false, 00:10:50.253 "zoned": false, 00:10:50.253 "supported_io_types": { 00:10:50.253 "read": true, 00:10:50.253 "write": true, 00:10:50.253 "unmap": true, 00:10:50.253 "flush": true, 00:10:50.253 "reset": true, 00:10:50.253 "nvme_admin": false, 00:10:50.253 "nvme_io": false, 00:10:50.253 "nvme_io_md": false, 00:10:50.253 "write_zeroes": true, 00:10:50.253 "zcopy": true, 00:10:50.253 "get_zone_info": false, 00:10:50.253 "zone_management": false, 00:10:50.253 "zone_append": false, 00:10:50.253 "compare": false, 00:10:50.253 "compare_and_write": false, 00:10:50.253 "abort": true, 00:10:50.253 "seek_hole": false, 00:10:50.253 "seek_data": false, 00:10:50.253 "copy": true, 00:10:50.253 "nvme_iov_md": false 00:10:50.253 }, 00:10:50.253 "memory_domains": [ 00:10:50.253 { 00:10:50.253 "dma_device_id": "system", 00:10:50.253 "dma_device_type": 1 00:10:50.253 }, 00:10:50.253 { 00:10:50.253 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.253 "dma_device_type": 2 00:10:50.253 } 00:10:50.253 ], 00:10:50.253 "driver_specific": {} 00:10:50.253 } 00:10:50.253 ] 00:10:50.253 16:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.253 16:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:50.253 16:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:50.253 16:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:50.253 16:07:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:50.253 16:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.253 16:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.253 BaseBdev3 00:10:50.253 16:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.253 16:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:50.253 16:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:50.253 16:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:50.253 16:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:50.253 16:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:50.253 16:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:50.253 16:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:50.253 16:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.253 16:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.253 16:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.253 16:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:50.253 16:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.253 16:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.253 [ 00:10:50.253 { 
00:10:50.254 "name": "BaseBdev3", 00:10:50.254 "aliases": [ 00:10:50.254 "ed7942b9-3d42-47f7-9377-b1751a0ebc97" 00:10:50.254 ], 00:10:50.254 "product_name": "Malloc disk", 00:10:50.254 "block_size": 512, 00:10:50.254 "num_blocks": 65536, 00:10:50.254 "uuid": "ed7942b9-3d42-47f7-9377-b1751a0ebc97", 00:10:50.254 "assigned_rate_limits": { 00:10:50.254 "rw_ios_per_sec": 0, 00:10:50.254 "rw_mbytes_per_sec": 0, 00:10:50.254 "r_mbytes_per_sec": 0, 00:10:50.254 "w_mbytes_per_sec": 0 00:10:50.254 }, 00:10:50.254 "claimed": false, 00:10:50.254 "zoned": false, 00:10:50.254 "supported_io_types": { 00:10:50.254 "read": true, 00:10:50.254 "write": true, 00:10:50.254 "unmap": true, 00:10:50.254 "flush": true, 00:10:50.254 "reset": true, 00:10:50.254 "nvme_admin": false, 00:10:50.254 "nvme_io": false, 00:10:50.254 "nvme_io_md": false, 00:10:50.254 "write_zeroes": true, 00:10:50.254 "zcopy": true, 00:10:50.254 "get_zone_info": false, 00:10:50.254 "zone_management": false, 00:10:50.254 "zone_append": false, 00:10:50.254 "compare": false, 00:10:50.254 "compare_and_write": false, 00:10:50.254 "abort": true, 00:10:50.254 "seek_hole": false, 00:10:50.254 "seek_data": false, 00:10:50.254 "copy": true, 00:10:50.254 "nvme_iov_md": false 00:10:50.254 }, 00:10:50.254 "memory_domains": [ 00:10:50.254 { 00:10:50.254 "dma_device_id": "system", 00:10:50.254 "dma_device_type": 1 00:10:50.254 }, 00:10:50.254 { 00:10:50.254 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.254 "dma_device_type": 2 00:10:50.254 } 00:10:50.254 ], 00:10:50.254 "driver_specific": {} 00:10:50.254 } 00:10:50.254 ] 00:10:50.254 16:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.254 16:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:50.254 16:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:50.254 16:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:10:50.254 16:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:50.254 16:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.254 16:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.514 BaseBdev4 00:10:50.514 16:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.514 16:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:50.514 16:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:50.514 16:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:50.514 16:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:50.514 16:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:50.514 16:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:50.514 16:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:50.514 16:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.514 16:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.515 16:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.515 16:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:50.515 16:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.515 16:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:10:50.515 [ 00:10:50.515 { 00:10:50.515 "name": "BaseBdev4", 00:10:50.515 "aliases": [ 00:10:50.515 "36a7facc-dca7-44a9-8822-ed74a0cf2546" 00:10:50.515 ], 00:10:50.515 "product_name": "Malloc disk", 00:10:50.515 "block_size": 512, 00:10:50.515 "num_blocks": 65536, 00:10:50.515 "uuid": "36a7facc-dca7-44a9-8822-ed74a0cf2546", 00:10:50.515 "assigned_rate_limits": { 00:10:50.515 "rw_ios_per_sec": 0, 00:10:50.515 "rw_mbytes_per_sec": 0, 00:10:50.515 "r_mbytes_per_sec": 0, 00:10:50.515 "w_mbytes_per_sec": 0 00:10:50.515 }, 00:10:50.515 "claimed": false, 00:10:50.515 "zoned": false, 00:10:50.515 "supported_io_types": { 00:10:50.515 "read": true, 00:10:50.515 "write": true, 00:10:50.515 "unmap": true, 00:10:50.515 "flush": true, 00:10:50.515 "reset": true, 00:10:50.515 "nvme_admin": false, 00:10:50.515 "nvme_io": false, 00:10:50.515 "nvme_io_md": false, 00:10:50.515 "write_zeroes": true, 00:10:50.515 "zcopy": true, 00:10:50.515 "get_zone_info": false, 00:10:50.515 "zone_management": false, 00:10:50.515 "zone_append": false, 00:10:50.515 "compare": false, 00:10:50.515 "compare_and_write": false, 00:10:50.515 "abort": true, 00:10:50.515 "seek_hole": false, 00:10:50.515 "seek_data": false, 00:10:50.515 "copy": true, 00:10:50.515 "nvme_iov_md": false 00:10:50.515 }, 00:10:50.515 "memory_domains": [ 00:10:50.515 { 00:10:50.515 "dma_device_id": "system", 00:10:50.515 "dma_device_type": 1 00:10:50.515 }, 00:10:50.515 { 00:10:50.515 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.515 "dma_device_type": 2 00:10:50.515 } 00:10:50.515 ], 00:10:50.515 "driver_specific": {} 00:10:50.515 } 00:10:50.515 ] 00:10:50.515 16:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.515 16:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:50.515 16:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:50.515 16:07:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:50.515 16:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:50.515 16:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.515 16:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.515 [2024-12-12 16:07:16.696848] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:50.515 [2024-12-12 16:07:16.696997] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:50.515 [2024-12-12 16:07:16.697045] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:50.515 [2024-12-12 16:07:16.699249] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:50.515 [2024-12-12 16:07:16.699344] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:50.515 16:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.515 16:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:50.515 16:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:50.515 16:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:50.515 16:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:50.515 16:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:50.515 16:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:50.515 16:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:50.515 16:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:50.515 16:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:50.515 16:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:50.515 16:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.515 16:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:50.515 16:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.515 16:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.515 16:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.515 16:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:50.515 "name": "Existed_Raid", 00:10:50.515 "uuid": "bfc5b596-07cd-4d4d-812b-ea054419b349", 00:10:50.515 "strip_size_kb": 64, 00:10:50.515 "state": "configuring", 00:10:50.515 "raid_level": "raid0", 00:10:50.515 "superblock": true, 00:10:50.515 "num_base_bdevs": 4, 00:10:50.515 "num_base_bdevs_discovered": 3, 00:10:50.515 "num_base_bdevs_operational": 4, 00:10:50.515 "base_bdevs_list": [ 00:10:50.515 { 00:10:50.515 "name": "BaseBdev1", 00:10:50.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:50.515 "is_configured": false, 00:10:50.515 "data_offset": 0, 00:10:50.515 "data_size": 0 00:10:50.515 }, 00:10:50.515 { 00:10:50.515 "name": "BaseBdev2", 00:10:50.515 "uuid": "2c4299ba-65b8-4dc3-800c-1be43ff29d5e", 00:10:50.515 "is_configured": true, 00:10:50.515 "data_offset": 2048, 00:10:50.515 "data_size": 63488 
00:10:50.515 }, 00:10:50.515 { 00:10:50.515 "name": "BaseBdev3", 00:10:50.515 "uuid": "ed7942b9-3d42-47f7-9377-b1751a0ebc97", 00:10:50.515 "is_configured": true, 00:10:50.515 "data_offset": 2048, 00:10:50.515 "data_size": 63488 00:10:50.515 }, 00:10:50.515 { 00:10:50.515 "name": "BaseBdev4", 00:10:50.515 "uuid": "36a7facc-dca7-44a9-8822-ed74a0cf2546", 00:10:50.516 "is_configured": true, 00:10:50.516 "data_offset": 2048, 00:10:50.516 "data_size": 63488 00:10:50.516 } 00:10:50.516 ] 00:10:50.516 }' 00:10:50.516 16:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:50.516 16:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.090 16:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:51.090 16:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.090 16:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.090 [2024-12-12 16:07:17.192112] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:51.090 16:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.090 16:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:51.090 16:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:51.090 16:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:51.090 16:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:51.090 16:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:51.090 16:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:51.090 16:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:51.090 16:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:51.090 16:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:51.090 16:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:51.090 16:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.090 16:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.090 16:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.090 16:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:51.090 16:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.090 16:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:51.090 "name": "Existed_Raid", 00:10:51.090 "uuid": "bfc5b596-07cd-4d4d-812b-ea054419b349", 00:10:51.090 "strip_size_kb": 64, 00:10:51.090 "state": "configuring", 00:10:51.090 "raid_level": "raid0", 00:10:51.090 "superblock": true, 00:10:51.090 "num_base_bdevs": 4, 00:10:51.090 "num_base_bdevs_discovered": 2, 00:10:51.090 "num_base_bdevs_operational": 4, 00:10:51.090 "base_bdevs_list": [ 00:10:51.090 { 00:10:51.090 "name": "BaseBdev1", 00:10:51.090 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:51.090 "is_configured": false, 00:10:51.090 "data_offset": 0, 00:10:51.090 "data_size": 0 00:10:51.090 }, 00:10:51.090 { 00:10:51.090 "name": null, 00:10:51.090 "uuid": "2c4299ba-65b8-4dc3-800c-1be43ff29d5e", 00:10:51.090 "is_configured": false, 00:10:51.090 "data_offset": 0, 00:10:51.090 "data_size": 63488 
00:10:51.090 }, 00:10:51.090 { 00:10:51.090 "name": "BaseBdev3", 00:10:51.090 "uuid": "ed7942b9-3d42-47f7-9377-b1751a0ebc97", 00:10:51.090 "is_configured": true, 00:10:51.090 "data_offset": 2048, 00:10:51.090 "data_size": 63488 00:10:51.090 }, 00:10:51.090 { 00:10:51.090 "name": "BaseBdev4", 00:10:51.090 "uuid": "36a7facc-dca7-44a9-8822-ed74a0cf2546", 00:10:51.090 "is_configured": true, 00:10:51.090 "data_offset": 2048, 00:10:51.090 "data_size": 63488 00:10:51.090 } 00:10:51.090 ] 00:10:51.090 }' 00:10:51.090 16:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:51.090 16:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.356 16:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.356 16:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:51.356 16:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.356 16:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.356 16:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.356 16:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:51.356 16:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:51.356 16:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.356 16:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.617 [2024-12-12 16:07:17.745065] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:51.617 BaseBdev1 00:10:51.617 16:07:17 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.617 16:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:51.617 16:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:51.617 16:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:51.617 16:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:51.617 16:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:51.617 16:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:51.617 16:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:51.617 16:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.617 16:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.617 16:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.617 16:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:51.617 16:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.617 16:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.617 [ 00:10:51.617 { 00:10:51.617 "name": "BaseBdev1", 00:10:51.617 "aliases": [ 00:10:51.617 "98142241-20d3-4c21-8b96-20525e7c2c50" 00:10:51.617 ], 00:10:51.617 "product_name": "Malloc disk", 00:10:51.617 "block_size": 512, 00:10:51.617 "num_blocks": 65536, 00:10:51.617 "uuid": "98142241-20d3-4c21-8b96-20525e7c2c50", 00:10:51.617 "assigned_rate_limits": { 00:10:51.617 "rw_ios_per_sec": 0, 00:10:51.617 "rw_mbytes_per_sec": 0, 
00:10:51.617 "r_mbytes_per_sec": 0, 00:10:51.617 "w_mbytes_per_sec": 0 00:10:51.617 }, 00:10:51.617 "claimed": true, 00:10:51.617 "claim_type": "exclusive_write", 00:10:51.617 "zoned": false, 00:10:51.617 "supported_io_types": { 00:10:51.617 "read": true, 00:10:51.617 "write": true, 00:10:51.617 "unmap": true, 00:10:51.617 "flush": true, 00:10:51.617 "reset": true, 00:10:51.617 "nvme_admin": false, 00:10:51.617 "nvme_io": false, 00:10:51.617 "nvme_io_md": false, 00:10:51.617 "write_zeroes": true, 00:10:51.617 "zcopy": true, 00:10:51.617 "get_zone_info": false, 00:10:51.617 "zone_management": false, 00:10:51.617 "zone_append": false, 00:10:51.617 "compare": false, 00:10:51.617 "compare_and_write": false, 00:10:51.617 "abort": true, 00:10:51.617 "seek_hole": false, 00:10:51.617 "seek_data": false, 00:10:51.617 "copy": true, 00:10:51.617 "nvme_iov_md": false 00:10:51.617 }, 00:10:51.617 "memory_domains": [ 00:10:51.617 { 00:10:51.617 "dma_device_id": "system", 00:10:51.617 "dma_device_type": 1 00:10:51.617 }, 00:10:51.617 { 00:10:51.617 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.617 "dma_device_type": 2 00:10:51.617 } 00:10:51.617 ], 00:10:51.617 "driver_specific": {} 00:10:51.617 } 00:10:51.617 ] 00:10:51.617 16:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.617 16:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:51.617 16:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:51.617 16:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:51.617 16:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:51.617 16:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:51.617 16:07:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:51.617 16:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:51.617 16:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:51.617 16:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:51.617 16:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:51.617 16:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:51.617 16:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.617 16:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.617 16:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.617 16:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:51.617 16:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.617 16:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:51.617 "name": "Existed_Raid", 00:10:51.617 "uuid": "bfc5b596-07cd-4d4d-812b-ea054419b349", 00:10:51.617 "strip_size_kb": 64, 00:10:51.617 "state": "configuring", 00:10:51.617 "raid_level": "raid0", 00:10:51.618 "superblock": true, 00:10:51.618 "num_base_bdevs": 4, 00:10:51.618 "num_base_bdevs_discovered": 3, 00:10:51.618 "num_base_bdevs_operational": 4, 00:10:51.618 "base_bdevs_list": [ 00:10:51.618 { 00:10:51.618 "name": "BaseBdev1", 00:10:51.618 "uuid": "98142241-20d3-4c21-8b96-20525e7c2c50", 00:10:51.618 "is_configured": true, 00:10:51.618 "data_offset": 2048, 00:10:51.618 "data_size": 63488 00:10:51.618 }, 00:10:51.618 { 
00:10:51.618 "name": null, 00:10:51.618 "uuid": "2c4299ba-65b8-4dc3-800c-1be43ff29d5e", 00:10:51.618 "is_configured": false, 00:10:51.618 "data_offset": 0, 00:10:51.618 "data_size": 63488 00:10:51.618 }, 00:10:51.618 { 00:10:51.618 "name": "BaseBdev3", 00:10:51.618 "uuid": "ed7942b9-3d42-47f7-9377-b1751a0ebc97", 00:10:51.618 "is_configured": true, 00:10:51.618 "data_offset": 2048, 00:10:51.618 "data_size": 63488 00:10:51.618 }, 00:10:51.618 { 00:10:51.618 "name": "BaseBdev4", 00:10:51.618 "uuid": "36a7facc-dca7-44a9-8822-ed74a0cf2546", 00:10:51.618 "is_configured": true, 00:10:51.618 "data_offset": 2048, 00:10:51.618 "data_size": 63488 00:10:51.618 } 00:10:51.618 ] 00:10:51.618 }' 00:10:51.618 16:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:51.618 16:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.188 16:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:52.188 16:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.188 16:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.188 16:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.188 16:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.188 16:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:52.188 16:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:52.188 16:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.188 16:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.188 [2024-12-12 16:07:18.304371] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:52.188 16:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.188 16:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:52.188 16:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:52.188 16:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:52.188 16:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:52.188 16:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:52.188 16:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:52.188 16:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:52.188 16:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:52.188 16:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:52.189 16:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:52.189 16:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.189 16:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:52.189 16:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.189 16:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.189 16:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.189 16:07:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:52.189 "name": "Existed_Raid", 00:10:52.189 "uuid": "bfc5b596-07cd-4d4d-812b-ea054419b349", 00:10:52.189 "strip_size_kb": 64, 00:10:52.189 "state": "configuring", 00:10:52.189 "raid_level": "raid0", 00:10:52.189 "superblock": true, 00:10:52.189 "num_base_bdevs": 4, 00:10:52.189 "num_base_bdevs_discovered": 2, 00:10:52.189 "num_base_bdevs_operational": 4, 00:10:52.189 "base_bdevs_list": [ 00:10:52.189 { 00:10:52.189 "name": "BaseBdev1", 00:10:52.189 "uuid": "98142241-20d3-4c21-8b96-20525e7c2c50", 00:10:52.189 "is_configured": true, 00:10:52.189 "data_offset": 2048, 00:10:52.189 "data_size": 63488 00:10:52.189 }, 00:10:52.189 { 00:10:52.189 "name": null, 00:10:52.189 "uuid": "2c4299ba-65b8-4dc3-800c-1be43ff29d5e", 00:10:52.189 "is_configured": false, 00:10:52.189 "data_offset": 0, 00:10:52.189 "data_size": 63488 00:10:52.189 }, 00:10:52.189 { 00:10:52.189 "name": null, 00:10:52.189 "uuid": "ed7942b9-3d42-47f7-9377-b1751a0ebc97", 00:10:52.189 "is_configured": false, 00:10:52.189 "data_offset": 0, 00:10:52.189 "data_size": 63488 00:10:52.189 }, 00:10:52.189 { 00:10:52.189 "name": "BaseBdev4", 00:10:52.189 "uuid": "36a7facc-dca7-44a9-8822-ed74a0cf2546", 00:10:52.189 "is_configured": true, 00:10:52.189 "data_offset": 2048, 00:10:52.189 "data_size": 63488 00:10:52.189 } 00:10:52.189 ] 00:10:52.189 }' 00:10:52.189 16:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:52.189 16:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.449 16:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.449 16:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.449 16:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.449 16:07:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:52.449 16:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.449 16:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:52.449 16:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:52.449 16:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.709 16:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.709 [2024-12-12 16:07:18.803655] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:52.709 16:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.709 16:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:52.709 16:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:52.709 16:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:52.709 16:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:52.709 16:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:52.709 16:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:52.709 16:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:52.709 16:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:52.709 16:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:52.709 16:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:52.709 16:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.709 16:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:52.709 16:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.710 16:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.710 16:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.710 16:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:52.710 "name": "Existed_Raid", 00:10:52.710 "uuid": "bfc5b596-07cd-4d4d-812b-ea054419b349", 00:10:52.710 "strip_size_kb": 64, 00:10:52.710 "state": "configuring", 00:10:52.710 "raid_level": "raid0", 00:10:52.710 "superblock": true, 00:10:52.710 "num_base_bdevs": 4, 00:10:52.710 "num_base_bdevs_discovered": 3, 00:10:52.710 "num_base_bdevs_operational": 4, 00:10:52.710 "base_bdevs_list": [ 00:10:52.710 { 00:10:52.710 "name": "BaseBdev1", 00:10:52.710 "uuid": "98142241-20d3-4c21-8b96-20525e7c2c50", 00:10:52.710 "is_configured": true, 00:10:52.710 "data_offset": 2048, 00:10:52.710 "data_size": 63488 00:10:52.710 }, 00:10:52.710 { 00:10:52.710 "name": null, 00:10:52.710 "uuid": "2c4299ba-65b8-4dc3-800c-1be43ff29d5e", 00:10:52.710 "is_configured": false, 00:10:52.710 "data_offset": 0, 00:10:52.710 "data_size": 63488 00:10:52.710 }, 00:10:52.710 { 00:10:52.710 "name": "BaseBdev3", 00:10:52.710 "uuid": "ed7942b9-3d42-47f7-9377-b1751a0ebc97", 00:10:52.710 "is_configured": true, 00:10:52.710 "data_offset": 2048, 00:10:52.710 "data_size": 63488 00:10:52.710 }, 00:10:52.710 { 00:10:52.710 "name": "BaseBdev4", 00:10:52.710 "uuid": 
"36a7facc-dca7-44a9-8822-ed74a0cf2546", 00:10:52.710 "is_configured": true, 00:10:52.710 "data_offset": 2048, 00:10:52.710 "data_size": 63488 00:10:52.710 } 00:10:52.710 ] 00:10:52.710 }' 00:10:52.710 16:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:52.710 16:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.970 16:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.970 16:07:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.970 16:07:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.970 16:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:52.970 16:07:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.970 16:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:52.970 16:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:52.970 16:07:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.970 16:07:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.230 [2024-12-12 16:07:19.322818] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:53.230 16:07:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.230 16:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:53.230 16:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:53.230 16:07:19 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:53.230 16:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:53.230 16:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:53.230 16:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:53.230 16:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:53.230 16:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:53.230 16:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:53.230 16:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:53.230 16:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.230 16:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:53.230 16:07:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.230 16:07:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.230 16:07:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.230 16:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:53.230 "name": "Existed_Raid", 00:10:53.230 "uuid": "bfc5b596-07cd-4d4d-812b-ea054419b349", 00:10:53.230 "strip_size_kb": 64, 00:10:53.230 "state": "configuring", 00:10:53.230 "raid_level": "raid0", 00:10:53.230 "superblock": true, 00:10:53.230 "num_base_bdevs": 4, 00:10:53.230 "num_base_bdevs_discovered": 2, 00:10:53.230 "num_base_bdevs_operational": 4, 00:10:53.230 "base_bdevs_list": [ 00:10:53.230 { 00:10:53.230 "name": null, 00:10:53.230 
"uuid": "98142241-20d3-4c21-8b96-20525e7c2c50", 00:10:53.230 "is_configured": false, 00:10:53.230 "data_offset": 0, 00:10:53.230 "data_size": 63488 00:10:53.230 }, 00:10:53.230 { 00:10:53.230 "name": null, 00:10:53.230 "uuid": "2c4299ba-65b8-4dc3-800c-1be43ff29d5e", 00:10:53.230 "is_configured": false, 00:10:53.230 "data_offset": 0, 00:10:53.230 "data_size": 63488 00:10:53.230 }, 00:10:53.230 { 00:10:53.230 "name": "BaseBdev3", 00:10:53.230 "uuid": "ed7942b9-3d42-47f7-9377-b1751a0ebc97", 00:10:53.230 "is_configured": true, 00:10:53.230 "data_offset": 2048, 00:10:53.230 "data_size": 63488 00:10:53.230 }, 00:10:53.230 { 00:10:53.230 "name": "BaseBdev4", 00:10:53.230 "uuid": "36a7facc-dca7-44a9-8822-ed74a0cf2546", 00:10:53.230 "is_configured": true, 00:10:53.230 "data_offset": 2048, 00:10:53.231 "data_size": 63488 00:10:53.231 } 00:10:53.231 ] 00:10:53.231 }' 00:10:53.231 16:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:53.231 16:07:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.802 16:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:53.802 16:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.802 16:07:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.802 16:07:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.802 16:07:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.802 16:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:53.802 16:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:53.802 16:07:19 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.802 16:07:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.802 [2024-12-12 16:07:19.892604] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:53.802 16:07:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.802 16:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:53.802 16:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:53.802 16:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:53.802 16:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:53.802 16:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:53.802 16:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:53.802 16:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:53.802 16:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:53.802 16:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:53.802 16:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:53.802 16:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.802 16:07:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.802 16:07:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.802 16:07:19 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:53.802 16:07:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.802 16:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:53.802 "name": "Existed_Raid", 00:10:53.802 "uuid": "bfc5b596-07cd-4d4d-812b-ea054419b349", 00:10:53.802 "strip_size_kb": 64, 00:10:53.802 "state": "configuring", 00:10:53.802 "raid_level": "raid0", 00:10:53.802 "superblock": true, 00:10:53.802 "num_base_bdevs": 4, 00:10:53.802 "num_base_bdevs_discovered": 3, 00:10:53.802 "num_base_bdevs_operational": 4, 00:10:53.802 "base_bdevs_list": [ 00:10:53.802 { 00:10:53.802 "name": null, 00:10:53.802 "uuid": "98142241-20d3-4c21-8b96-20525e7c2c50", 00:10:53.802 "is_configured": false, 00:10:53.802 "data_offset": 0, 00:10:53.802 "data_size": 63488 00:10:53.802 }, 00:10:53.802 { 00:10:53.802 "name": "BaseBdev2", 00:10:53.802 "uuid": "2c4299ba-65b8-4dc3-800c-1be43ff29d5e", 00:10:53.802 "is_configured": true, 00:10:53.802 "data_offset": 2048, 00:10:53.802 "data_size": 63488 00:10:53.802 }, 00:10:53.802 { 00:10:53.802 "name": "BaseBdev3", 00:10:53.802 "uuid": "ed7942b9-3d42-47f7-9377-b1751a0ebc97", 00:10:53.802 "is_configured": true, 00:10:53.802 "data_offset": 2048, 00:10:53.802 "data_size": 63488 00:10:53.802 }, 00:10:53.802 { 00:10:53.802 "name": "BaseBdev4", 00:10:53.802 "uuid": "36a7facc-dca7-44a9-8822-ed74a0cf2546", 00:10:53.802 "is_configured": true, 00:10:53.802 "data_offset": 2048, 00:10:53.802 "data_size": 63488 00:10:53.802 } 00:10:53.802 ] 00:10:53.802 }' 00:10:53.802 16:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:53.802 16:07:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.062 16:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:54.062 16:07:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.062 16:07:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.062 16:07:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.062 16:07:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.062 16:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:54.062 16:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:54.062 16:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.062 16:07:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.062 16:07:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.062 16:07:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.062 16:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 98142241-20d3-4c21-8b96-20525e7c2c50 00:10:54.062 16:07:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.062 16:07:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.322 [2024-12-12 16:07:20.455662] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:54.322 [2024-12-12 16:07:20.456016] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:54.322 [2024-12-12 16:07:20.456035] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:54.322 [2024-12-12 16:07:20.456345] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:10:54.322 [2024-12-12 16:07:20.456500] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:54.322 [2024-12-12 16:07:20.456512] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:54.322 NewBaseBdev 00:10:54.322 [2024-12-12 16:07:20.456649] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:54.322 16:07:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.322 16:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:54.322 16:07:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:54.322 16:07:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:54.322 16:07:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:54.322 16:07:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:54.322 16:07:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:54.322 16:07:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:54.322 16:07:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.322 16:07:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.322 16:07:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.322 16:07:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:54.322 16:07:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.322 16:07:20 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.322 [ 00:10:54.322 { 00:10:54.322 "name": "NewBaseBdev", 00:10:54.322 "aliases": [ 00:10:54.322 "98142241-20d3-4c21-8b96-20525e7c2c50" 00:10:54.322 ], 00:10:54.322 "product_name": "Malloc disk", 00:10:54.322 "block_size": 512, 00:10:54.322 "num_blocks": 65536, 00:10:54.322 "uuid": "98142241-20d3-4c21-8b96-20525e7c2c50", 00:10:54.322 "assigned_rate_limits": { 00:10:54.322 "rw_ios_per_sec": 0, 00:10:54.322 "rw_mbytes_per_sec": 0, 00:10:54.322 "r_mbytes_per_sec": 0, 00:10:54.322 "w_mbytes_per_sec": 0 00:10:54.322 }, 00:10:54.322 "claimed": true, 00:10:54.322 "claim_type": "exclusive_write", 00:10:54.322 "zoned": false, 00:10:54.322 "supported_io_types": { 00:10:54.322 "read": true, 00:10:54.322 "write": true, 00:10:54.322 "unmap": true, 00:10:54.322 "flush": true, 00:10:54.322 "reset": true, 00:10:54.322 "nvme_admin": false, 00:10:54.322 "nvme_io": false, 00:10:54.322 "nvme_io_md": false, 00:10:54.322 "write_zeroes": true, 00:10:54.322 "zcopy": true, 00:10:54.322 "get_zone_info": false, 00:10:54.322 "zone_management": false, 00:10:54.322 "zone_append": false, 00:10:54.322 "compare": false, 00:10:54.322 "compare_and_write": false, 00:10:54.322 "abort": true, 00:10:54.322 "seek_hole": false, 00:10:54.322 "seek_data": false, 00:10:54.322 "copy": true, 00:10:54.322 "nvme_iov_md": false 00:10:54.322 }, 00:10:54.322 "memory_domains": [ 00:10:54.322 { 00:10:54.322 "dma_device_id": "system", 00:10:54.322 "dma_device_type": 1 00:10:54.322 }, 00:10:54.322 { 00:10:54.322 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:54.322 "dma_device_type": 2 00:10:54.322 } 00:10:54.322 ], 00:10:54.322 "driver_specific": {} 00:10:54.322 } 00:10:54.322 ] 00:10:54.322 16:07:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.322 16:07:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:54.322 16:07:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:10:54.322 16:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:54.323 16:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:54.323 16:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:54.323 16:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:54.323 16:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:54.323 16:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:54.323 16:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:54.323 16:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:54.323 16:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:54.323 16:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.323 16:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:54.323 16:07:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.323 16:07:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.323 16:07:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.323 16:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:54.323 "name": "Existed_Raid", 00:10:54.323 "uuid": "bfc5b596-07cd-4d4d-812b-ea054419b349", 00:10:54.323 "strip_size_kb": 64, 00:10:54.323 
"state": "online", 00:10:54.323 "raid_level": "raid0", 00:10:54.323 "superblock": true, 00:10:54.323 "num_base_bdevs": 4, 00:10:54.323 "num_base_bdevs_discovered": 4, 00:10:54.323 "num_base_bdevs_operational": 4, 00:10:54.323 "base_bdevs_list": [ 00:10:54.323 { 00:10:54.323 "name": "NewBaseBdev", 00:10:54.323 "uuid": "98142241-20d3-4c21-8b96-20525e7c2c50", 00:10:54.323 "is_configured": true, 00:10:54.323 "data_offset": 2048, 00:10:54.323 "data_size": 63488 00:10:54.323 }, 00:10:54.323 { 00:10:54.323 "name": "BaseBdev2", 00:10:54.323 "uuid": "2c4299ba-65b8-4dc3-800c-1be43ff29d5e", 00:10:54.323 "is_configured": true, 00:10:54.323 "data_offset": 2048, 00:10:54.323 "data_size": 63488 00:10:54.323 }, 00:10:54.323 { 00:10:54.323 "name": "BaseBdev3", 00:10:54.323 "uuid": "ed7942b9-3d42-47f7-9377-b1751a0ebc97", 00:10:54.323 "is_configured": true, 00:10:54.323 "data_offset": 2048, 00:10:54.323 "data_size": 63488 00:10:54.323 }, 00:10:54.323 { 00:10:54.323 "name": "BaseBdev4", 00:10:54.323 "uuid": "36a7facc-dca7-44a9-8822-ed74a0cf2546", 00:10:54.323 "is_configured": true, 00:10:54.323 "data_offset": 2048, 00:10:54.323 "data_size": 63488 00:10:54.323 } 00:10:54.323 ] 00:10:54.323 }' 00:10:54.323 16:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:54.323 16:07:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.892 16:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:54.892 16:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:54.892 16:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:54.892 16:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:54.892 16:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:54.892 
16:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:54.892 16:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:54.892 16:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:54.892 16:07:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.892 16:07:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.892 [2024-12-12 16:07:20.963284] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:54.892 16:07:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.892 16:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:54.892 "name": "Existed_Raid", 00:10:54.892 "aliases": [ 00:10:54.892 "bfc5b596-07cd-4d4d-812b-ea054419b349" 00:10:54.892 ], 00:10:54.892 "product_name": "Raid Volume", 00:10:54.892 "block_size": 512, 00:10:54.892 "num_blocks": 253952, 00:10:54.893 "uuid": "bfc5b596-07cd-4d4d-812b-ea054419b349", 00:10:54.893 "assigned_rate_limits": { 00:10:54.893 "rw_ios_per_sec": 0, 00:10:54.893 "rw_mbytes_per_sec": 0, 00:10:54.893 "r_mbytes_per_sec": 0, 00:10:54.893 "w_mbytes_per_sec": 0 00:10:54.893 }, 00:10:54.893 "claimed": false, 00:10:54.893 "zoned": false, 00:10:54.893 "supported_io_types": { 00:10:54.893 "read": true, 00:10:54.893 "write": true, 00:10:54.893 "unmap": true, 00:10:54.893 "flush": true, 00:10:54.893 "reset": true, 00:10:54.893 "nvme_admin": false, 00:10:54.893 "nvme_io": false, 00:10:54.893 "nvme_io_md": false, 00:10:54.893 "write_zeroes": true, 00:10:54.893 "zcopy": false, 00:10:54.893 "get_zone_info": false, 00:10:54.893 "zone_management": false, 00:10:54.893 "zone_append": false, 00:10:54.893 "compare": false, 00:10:54.893 "compare_and_write": false, 00:10:54.893 "abort": 
false, 00:10:54.893 "seek_hole": false, 00:10:54.893 "seek_data": false, 00:10:54.893 "copy": false, 00:10:54.893 "nvme_iov_md": false 00:10:54.893 }, 00:10:54.893 "memory_domains": [ 00:10:54.893 { 00:10:54.893 "dma_device_id": "system", 00:10:54.893 "dma_device_type": 1 00:10:54.893 }, 00:10:54.893 { 00:10:54.893 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:54.893 "dma_device_type": 2 00:10:54.893 }, 00:10:54.893 { 00:10:54.893 "dma_device_id": "system", 00:10:54.893 "dma_device_type": 1 00:10:54.893 }, 00:10:54.893 { 00:10:54.893 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:54.893 "dma_device_type": 2 00:10:54.893 }, 00:10:54.893 { 00:10:54.893 "dma_device_id": "system", 00:10:54.893 "dma_device_type": 1 00:10:54.893 }, 00:10:54.893 { 00:10:54.893 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:54.893 "dma_device_type": 2 00:10:54.893 }, 00:10:54.893 { 00:10:54.893 "dma_device_id": "system", 00:10:54.893 "dma_device_type": 1 00:10:54.893 }, 00:10:54.893 { 00:10:54.893 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:54.893 "dma_device_type": 2 00:10:54.893 } 00:10:54.893 ], 00:10:54.893 "driver_specific": { 00:10:54.893 "raid": { 00:10:54.893 "uuid": "bfc5b596-07cd-4d4d-812b-ea054419b349", 00:10:54.893 "strip_size_kb": 64, 00:10:54.893 "state": "online", 00:10:54.893 "raid_level": "raid0", 00:10:54.893 "superblock": true, 00:10:54.893 "num_base_bdevs": 4, 00:10:54.893 "num_base_bdevs_discovered": 4, 00:10:54.893 "num_base_bdevs_operational": 4, 00:10:54.893 "base_bdevs_list": [ 00:10:54.893 { 00:10:54.893 "name": "NewBaseBdev", 00:10:54.893 "uuid": "98142241-20d3-4c21-8b96-20525e7c2c50", 00:10:54.893 "is_configured": true, 00:10:54.893 "data_offset": 2048, 00:10:54.893 "data_size": 63488 00:10:54.893 }, 00:10:54.893 { 00:10:54.893 "name": "BaseBdev2", 00:10:54.893 "uuid": "2c4299ba-65b8-4dc3-800c-1be43ff29d5e", 00:10:54.893 "is_configured": true, 00:10:54.893 "data_offset": 2048, 00:10:54.893 "data_size": 63488 00:10:54.893 }, 00:10:54.893 { 00:10:54.893 
"name": "BaseBdev3", 00:10:54.893 "uuid": "ed7942b9-3d42-47f7-9377-b1751a0ebc97", 00:10:54.893 "is_configured": true, 00:10:54.893 "data_offset": 2048, 00:10:54.893 "data_size": 63488 00:10:54.893 }, 00:10:54.893 { 00:10:54.893 "name": "BaseBdev4", 00:10:54.893 "uuid": "36a7facc-dca7-44a9-8822-ed74a0cf2546", 00:10:54.893 "is_configured": true, 00:10:54.893 "data_offset": 2048, 00:10:54.893 "data_size": 63488 00:10:54.893 } 00:10:54.893 ] 00:10:54.893 } 00:10:54.893 } 00:10:54.893 }' 00:10:54.893 16:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:54.893 16:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:54.893 BaseBdev2 00:10:54.893 BaseBdev3 00:10:54.893 BaseBdev4' 00:10:54.893 16:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:54.893 16:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:54.893 16:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:54.893 16:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:54.893 16:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:54.893 16:07:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.893 16:07:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.893 16:07:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.893 16:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:54.893 16:07:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:54.893 16:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:54.893 16:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:54.893 16:07:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.893 16:07:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.893 16:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:54.893 16:07:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.893 16:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:54.893 16:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:54.893 16:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:54.893 16:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:54.893 16:07:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.893 16:07:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.893 16:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:54.893 16:07:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.153 16:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:55.153 16:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:10:55.153 16:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:55.153 16:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:55.153 16:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:55.153 16:07:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.153 16:07:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.153 16:07:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.153 16:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:55.153 16:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:55.153 16:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:55.153 16:07:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.153 16:07:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.153 [2024-12-12 16:07:21.314290] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:55.153 [2024-12-12 16:07:21.314421] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:55.153 [2024-12-12 16:07:21.314549] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:55.153 [2024-12-12 16:07:21.314659] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:55.153 [2024-12-12 16:07:21.314706] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:10:55.153 16:07:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.153 16:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 72077 00:10:55.153 16:07:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 72077 ']' 00:10:55.153 16:07:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 72077 00:10:55.153 16:07:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:10:55.153 16:07:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:55.153 16:07:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72077 00:10:55.153 killing process with pid 72077 00:10:55.153 16:07:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:55.153 16:07:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:55.153 16:07:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72077' 00:10:55.153 16:07:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 72077 00:10:55.153 [2024-12-12 16:07:21.352772] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:55.153 16:07:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 72077 00:10:55.722 [2024-12-12 16:07:21.813320] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:57.101 16:07:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:57.101 00:10:57.101 real 0m12.189s 00:10:57.101 user 0m19.015s 00:10:57.101 sys 0m2.200s 00:10:57.101 16:07:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:57.101 16:07:23 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.101 ************************************ 00:10:57.101 END TEST raid_state_function_test_sb 00:10:57.101 ************************************ 00:10:57.101 16:07:23 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:10:57.101 16:07:23 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:57.101 16:07:23 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:57.101 16:07:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:57.101 ************************************ 00:10:57.101 START TEST raid_superblock_test 00:10:57.101 ************************************ 00:10:57.101 16:07:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 4 00:10:57.101 16:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:10:57.101 16:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:10:57.101 16:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:57.101 16:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:57.101 16:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:57.101 16:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:57.101 16:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:57.101 16:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:57.101 16:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:57.101 16:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:57.101 16:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 
-- # local strip_size_create_arg 00:10:57.101 16:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:57.101 16:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:57.101 16:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:10:57.101 16:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:10:57.101 16:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:10:57.101 16:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72749 00:10:57.101 16:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72749 00:10:57.101 16:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:57.101 16:07:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 72749 ']' 00:10:57.101 16:07:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:57.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:57.101 16:07:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:57.101 16:07:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:57.101 16:07:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:57.101 16:07:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.101 [2024-12-12 16:07:23.287573] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:10:57.101 [2024-12-12 16:07:23.287814] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72749 ] 00:10:57.361 [2024-12-12 16:07:23.466420] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:57.361 [2024-12-12 16:07:23.614401] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:57.620 [2024-12-12 16:07:23.854413] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:57.620 [2024-12-12 16:07:23.854490] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:57.880 16:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:57.880 16:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:10:57.880 16:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:57.880 16:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:57.880 16:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:57.880 16:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:57.880 16:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:57.880 16:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:57.880 16:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:57.880 16:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:57.880 16:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:57.880 
16:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.880 16:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.880 malloc1 00:10:57.880 16:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.880 16:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:57.880 16:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.880 16:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.880 [2024-12-12 16:07:24.168907] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:57.880 [2024-12-12 16:07:24.169093] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:57.880 [2024-12-12 16:07:24.169148] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:57.880 [2024-12-12 16:07:24.169191] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:57.880 [2024-12-12 16:07:24.172047] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:57.880 [2024-12-12 16:07:24.172142] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:57.880 pt1 00:10:57.880 16:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.880 16:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:57.880 16:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:57.880 16:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:57.881 16:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:57.881 16:07:24 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:57.881 16:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:57.881 16:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:57.881 16:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:57.881 16:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:57.881 16:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.881 16:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.881 malloc2 00:10:58.141 16:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.141 16:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:58.141 16:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.141 16:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.141 [2024-12-12 16:07:24.236193] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:58.141 [2024-12-12 16:07:24.236274] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:58.141 [2024-12-12 16:07:24.236302] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:58.141 [2024-12-12 16:07:24.236312] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:58.141 [2024-12-12 16:07:24.238831] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:58.141 [2024-12-12 16:07:24.238872] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:58.141 
pt2 00:10:58.141 16:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.141 16:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:58.141 16:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:58.141 16:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:58.141 16:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:58.141 16:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:58.141 16:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:58.141 16:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:58.141 16:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:58.141 16:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:58.141 16:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.141 16:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.141 malloc3 00:10:58.141 16:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.141 16:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:58.141 16:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.141 16:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.141 [2024-12-12 16:07:24.312385] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:58.141 [2024-12-12 16:07:24.312531] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:58.141 [2024-12-12 16:07:24.312584] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:58.141 [2024-12-12 16:07:24.312617] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:58.141 [2024-12-12 16:07:24.315023] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:58.141 [2024-12-12 16:07:24.315104] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:58.141 pt3 00:10:58.141 16:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.141 16:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:58.141 16:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:58.141 16:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:10:58.141 16:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:10:58.141 16:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:10:58.141 16:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:58.141 16:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:58.141 16:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:58.141 16:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:10:58.141 16:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.141 16:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.141 malloc4 00:10:58.141 16:07:24 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.141 16:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:58.141 16:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.141 16:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.141 [2024-12-12 16:07:24.379200] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:58.141 [2024-12-12 16:07:24.379367] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:58.141 [2024-12-12 16:07:24.379417] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:58.141 [2024-12-12 16:07:24.379455] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:58.141 [2024-12-12 16:07:24.382014] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:58.141 [2024-12-12 16:07:24.382086] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:58.141 pt4 00:10:58.141 16:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.141 16:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:58.141 16:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:58.141 16:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:10:58.141 16:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.141 16:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.141 [2024-12-12 16:07:24.391223] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:58.142 [2024-12-12 
16:07:24.393359] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:58.142 [2024-12-12 16:07:24.393506] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:58.142 [2024-12-12 16:07:24.393583] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:58.142 [2024-12-12 16:07:24.393811] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:58.142 [2024-12-12 16:07:24.393858] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:58.142 [2024-12-12 16:07:24.394182] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:58.142 [2024-12-12 16:07:24.394375] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:58.142 [2024-12-12 16:07:24.394389] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:58.142 [2024-12-12 16:07:24.394566] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:58.142 16:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.142 16:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:58.142 16:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:58.142 16:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:58.142 16:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:58.142 16:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:58.142 16:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:58.142 16:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:58.142 16:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:58.142 16:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:58.142 16:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:58.142 16:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.142 16:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:58.142 16:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.142 16:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.142 16:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.142 16:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:58.142 "name": "raid_bdev1", 00:10:58.142 "uuid": "564c8f1d-1c26-4a17-b8e9-f871acb789c5", 00:10:58.142 "strip_size_kb": 64, 00:10:58.142 "state": "online", 00:10:58.142 "raid_level": "raid0", 00:10:58.142 "superblock": true, 00:10:58.142 "num_base_bdevs": 4, 00:10:58.142 "num_base_bdevs_discovered": 4, 00:10:58.142 "num_base_bdevs_operational": 4, 00:10:58.142 "base_bdevs_list": [ 00:10:58.142 { 00:10:58.142 "name": "pt1", 00:10:58.142 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:58.142 "is_configured": true, 00:10:58.142 "data_offset": 2048, 00:10:58.142 "data_size": 63488 00:10:58.142 }, 00:10:58.142 { 00:10:58.142 "name": "pt2", 00:10:58.142 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:58.142 "is_configured": true, 00:10:58.142 "data_offset": 2048, 00:10:58.142 "data_size": 63488 00:10:58.142 }, 00:10:58.142 { 00:10:58.142 "name": "pt3", 00:10:58.142 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:58.142 "is_configured": true, 00:10:58.142 "data_offset": 2048, 00:10:58.142 
"data_size": 63488 00:10:58.142 }, 00:10:58.142 { 00:10:58.142 "name": "pt4", 00:10:58.142 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:58.142 "is_configured": true, 00:10:58.142 "data_offset": 2048, 00:10:58.142 "data_size": 63488 00:10:58.142 } 00:10:58.142 ] 00:10:58.142 }' 00:10:58.142 16:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:58.142 16:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.711 16:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:58.711 16:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:58.711 16:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:58.711 16:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:58.711 16:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:58.711 16:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:58.711 16:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:58.711 16:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:58.711 16:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.711 16:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.711 [2024-12-12 16:07:24.842843] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:58.711 16:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.711 16:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:58.711 "name": "raid_bdev1", 00:10:58.711 "aliases": [ 00:10:58.711 "564c8f1d-1c26-4a17-b8e9-f871acb789c5" 
00:10:58.711 ], 00:10:58.711 "product_name": "Raid Volume", 00:10:58.711 "block_size": 512, 00:10:58.711 "num_blocks": 253952, 00:10:58.711 "uuid": "564c8f1d-1c26-4a17-b8e9-f871acb789c5", 00:10:58.711 "assigned_rate_limits": { 00:10:58.711 "rw_ios_per_sec": 0, 00:10:58.711 "rw_mbytes_per_sec": 0, 00:10:58.711 "r_mbytes_per_sec": 0, 00:10:58.711 "w_mbytes_per_sec": 0 00:10:58.711 }, 00:10:58.711 "claimed": false, 00:10:58.711 "zoned": false, 00:10:58.711 "supported_io_types": { 00:10:58.711 "read": true, 00:10:58.711 "write": true, 00:10:58.711 "unmap": true, 00:10:58.711 "flush": true, 00:10:58.711 "reset": true, 00:10:58.711 "nvme_admin": false, 00:10:58.711 "nvme_io": false, 00:10:58.711 "nvme_io_md": false, 00:10:58.711 "write_zeroes": true, 00:10:58.711 "zcopy": false, 00:10:58.711 "get_zone_info": false, 00:10:58.711 "zone_management": false, 00:10:58.711 "zone_append": false, 00:10:58.711 "compare": false, 00:10:58.711 "compare_and_write": false, 00:10:58.711 "abort": false, 00:10:58.711 "seek_hole": false, 00:10:58.711 "seek_data": false, 00:10:58.711 "copy": false, 00:10:58.711 "nvme_iov_md": false 00:10:58.711 }, 00:10:58.711 "memory_domains": [ 00:10:58.711 { 00:10:58.711 "dma_device_id": "system", 00:10:58.712 "dma_device_type": 1 00:10:58.712 }, 00:10:58.712 { 00:10:58.712 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:58.712 "dma_device_type": 2 00:10:58.712 }, 00:10:58.712 { 00:10:58.712 "dma_device_id": "system", 00:10:58.712 "dma_device_type": 1 00:10:58.712 }, 00:10:58.712 { 00:10:58.712 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:58.712 "dma_device_type": 2 00:10:58.712 }, 00:10:58.712 { 00:10:58.712 "dma_device_id": "system", 00:10:58.712 "dma_device_type": 1 00:10:58.712 }, 00:10:58.712 { 00:10:58.712 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:58.712 "dma_device_type": 2 00:10:58.712 }, 00:10:58.712 { 00:10:58.712 "dma_device_id": "system", 00:10:58.712 "dma_device_type": 1 00:10:58.712 }, 00:10:58.712 { 00:10:58.712 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:58.712 "dma_device_type": 2 00:10:58.712 } 00:10:58.712 ], 00:10:58.712 "driver_specific": { 00:10:58.712 "raid": { 00:10:58.712 "uuid": "564c8f1d-1c26-4a17-b8e9-f871acb789c5", 00:10:58.712 "strip_size_kb": 64, 00:10:58.712 "state": "online", 00:10:58.712 "raid_level": "raid0", 00:10:58.712 "superblock": true, 00:10:58.712 "num_base_bdevs": 4, 00:10:58.712 "num_base_bdevs_discovered": 4, 00:10:58.712 "num_base_bdevs_operational": 4, 00:10:58.712 "base_bdevs_list": [ 00:10:58.712 { 00:10:58.712 "name": "pt1", 00:10:58.712 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:58.712 "is_configured": true, 00:10:58.712 "data_offset": 2048, 00:10:58.712 "data_size": 63488 00:10:58.712 }, 00:10:58.712 { 00:10:58.712 "name": "pt2", 00:10:58.712 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:58.712 "is_configured": true, 00:10:58.712 "data_offset": 2048, 00:10:58.712 "data_size": 63488 00:10:58.712 }, 00:10:58.712 { 00:10:58.712 "name": "pt3", 00:10:58.712 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:58.712 "is_configured": true, 00:10:58.712 "data_offset": 2048, 00:10:58.712 "data_size": 63488 00:10:58.712 }, 00:10:58.712 { 00:10:58.712 "name": "pt4", 00:10:58.712 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:58.712 "is_configured": true, 00:10:58.712 "data_offset": 2048, 00:10:58.712 "data_size": 63488 00:10:58.712 } 00:10:58.712 ] 00:10:58.712 } 00:10:58.712 } 00:10:58.712 }' 00:10:58.712 16:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:58.712 16:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:58.712 pt2 00:10:58.712 pt3 00:10:58.712 pt4' 00:10:58.712 16:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:58.712 16:07:24 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:58.712 16:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:58.712 16:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:58.712 16:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.712 16:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.712 16:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:58.712 16:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.712 16:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:58.712 16:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:58.712 16:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:58.712 16:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:58.712 16:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.712 16:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:58.712 16:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.712 16:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.977 16:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:58.977 16:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:58.977 16:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:58.977 16:07:25 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:58.977 16:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.977 16:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.977 16:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:58.977 16:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.977 16:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:58.977 16:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:58.977 16:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:58.977 16:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:58.977 16:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:58.978 16:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.978 16:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.978 16:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.978 16:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:58.978 16:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:58.978 16:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:58.978 16:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:58.978 16:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:58.978 16:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.978 [2024-12-12 16:07:25.186315] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:58.978 16:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.978 16:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=564c8f1d-1c26-4a17-b8e9-f871acb789c5 00:10:58.978 16:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 564c8f1d-1c26-4a17-b8e9-f871acb789c5 ']' 00:10:58.978 16:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:58.978 16:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.978 16:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.978 [2024-12-12 16:07:25.233870] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:58.978 [2024-12-12 16:07:25.234016] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:58.978 [2024-12-12 16:07:25.234169] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:58.978 [2024-12-12 16:07:25.234295] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:58.978 [2024-12-12 16:07:25.234348] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:58.978 16:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.978 16:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.978 16:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.978 16:07:25 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:58.978 16:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:58.978 16:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.978 16:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:58.978 16:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:58.978 16:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:58.978 16:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:58.978 16:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.978 16:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.978 16:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.978 16:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:58.978 16:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:58.978 16:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.978 16:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.978 16:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.978 16:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:58.978 16:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:58.978 16:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.978 16:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.243 16:07:25 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.243 16:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:59.243 16:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:10:59.243 16:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.243 16:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.243 16:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.243 16:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:59.243 16:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:59.243 16:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.243 16:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.243 16:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.243 16:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:59.243 16:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:59.243 16:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:10:59.243 16:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:59.243 16:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:59.243 16:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:59.243 16:07:25 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:59.243 16:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:59.243 16:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:59.243 16:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.243 16:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.243 [2024-12-12 16:07:25.401635] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:59.243 [2024-12-12 16:07:25.404030] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:59.243 [2024-12-12 16:07:25.404128] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:59.243 [2024-12-12 16:07:25.404185] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:10:59.243 [2024-12-12 16:07:25.404283] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:59.243 [2024-12-12 16:07:25.404391] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:59.243 [2024-12-12 16:07:25.404462] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:59.243 [2024-12-12 16:07:25.404526] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:10:59.243 [2024-12-12 16:07:25.404574] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:59.243 [2024-12-12 16:07:25.404630] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:10:59.243 request: 00:10:59.243 { 00:10:59.243 "name": "raid_bdev1", 00:10:59.244 "raid_level": "raid0", 00:10:59.244 "base_bdevs": [ 00:10:59.244 "malloc1", 00:10:59.244 "malloc2", 00:10:59.244 "malloc3", 00:10:59.244 "malloc4" 00:10:59.244 ], 00:10:59.244 "strip_size_kb": 64, 00:10:59.244 "superblock": false, 00:10:59.244 "method": "bdev_raid_create", 00:10:59.244 "req_id": 1 00:10:59.244 } 00:10:59.244 Got JSON-RPC error response 00:10:59.244 response: 00:10:59.244 { 00:10:59.244 "code": -17, 00:10:59.244 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:59.244 } 00:10:59.244 16:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:59.244 16:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:10:59.244 16:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:59.244 16:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:59.244 16:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:59.244 16:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.244 16:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.244 16:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.244 16:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:59.244 16:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.244 16:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:59.244 16:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:59.244 16:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:10:59.244 16:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.244 16:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.244 [2024-12-12 16:07:25.469383] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:59.244 [2024-12-12 16:07:25.469492] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:59.244 [2024-12-12 16:07:25.469526] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:59.244 [2024-12-12 16:07:25.469556] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:59.244 [2024-12-12 16:07:25.472034] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:59.244 [2024-12-12 16:07:25.472107] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:59.244 [2024-12-12 16:07:25.472210] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:59.244 [2024-12-12 16:07:25.472285] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:59.244 pt1 00:10:59.244 16:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.244 16:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:10:59.244 16:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:59.244 16:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:59.244 16:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:59.244 16:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:59.244 16:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:10:59.244 16:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:59.244 16:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:59.244 16:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:59.244 16:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:59.244 16:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.244 16:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:59.244 16:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.244 16:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.244 16:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.244 16:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:59.244 "name": "raid_bdev1", 00:10:59.244 "uuid": "564c8f1d-1c26-4a17-b8e9-f871acb789c5", 00:10:59.244 "strip_size_kb": 64, 00:10:59.244 "state": "configuring", 00:10:59.244 "raid_level": "raid0", 00:10:59.244 "superblock": true, 00:10:59.244 "num_base_bdevs": 4, 00:10:59.244 "num_base_bdevs_discovered": 1, 00:10:59.244 "num_base_bdevs_operational": 4, 00:10:59.244 "base_bdevs_list": [ 00:10:59.244 { 00:10:59.244 "name": "pt1", 00:10:59.244 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:59.244 "is_configured": true, 00:10:59.244 "data_offset": 2048, 00:10:59.244 "data_size": 63488 00:10:59.244 }, 00:10:59.244 { 00:10:59.244 "name": null, 00:10:59.244 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:59.244 "is_configured": false, 00:10:59.244 "data_offset": 2048, 00:10:59.244 "data_size": 63488 00:10:59.244 }, 00:10:59.244 { 00:10:59.244 "name": null, 00:10:59.244 
"uuid": "00000000-0000-0000-0000-000000000003", 00:10:59.244 "is_configured": false, 00:10:59.244 "data_offset": 2048, 00:10:59.244 "data_size": 63488 00:10:59.244 }, 00:10:59.244 { 00:10:59.244 "name": null, 00:10:59.244 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:59.244 "is_configured": false, 00:10:59.244 "data_offset": 2048, 00:10:59.244 "data_size": 63488 00:10:59.244 } 00:10:59.244 ] 00:10:59.244 }' 00:10:59.244 16:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:59.244 16:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.813 16:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:10:59.813 16:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:59.813 16:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.813 16:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.813 [2024-12-12 16:07:25.952616] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:59.813 [2024-12-12 16:07:25.952808] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:59.813 [2024-12-12 16:07:25.952836] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:10:59.813 [2024-12-12 16:07:25.952849] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:59.813 [2024-12-12 16:07:25.953407] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:59.813 [2024-12-12 16:07:25.953430] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:59.813 [2024-12-12 16:07:25.953526] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:59.813 [2024-12-12 16:07:25.953555] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:59.813 pt2 00:10:59.813 16:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.813 16:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:59.813 16:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.813 16:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.813 [2024-12-12 16:07:25.960566] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:59.813 16:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.813 16:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:10:59.813 16:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:59.813 16:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:59.813 16:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:59.813 16:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:59.813 16:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:59.813 16:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:59.813 16:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:59.813 16:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:59.813 16:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:59.813 16:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.813 16:07:25 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:59.813 16:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.813 16:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.813 16:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.813 16:07:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:59.813 "name": "raid_bdev1", 00:10:59.813 "uuid": "564c8f1d-1c26-4a17-b8e9-f871acb789c5", 00:10:59.813 "strip_size_kb": 64, 00:10:59.813 "state": "configuring", 00:10:59.813 "raid_level": "raid0", 00:10:59.813 "superblock": true, 00:10:59.813 "num_base_bdevs": 4, 00:10:59.813 "num_base_bdevs_discovered": 1, 00:10:59.813 "num_base_bdevs_operational": 4, 00:10:59.813 "base_bdevs_list": [ 00:10:59.813 { 00:10:59.813 "name": "pt1", 00:10:59.813 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:59.813 "is_configured": true, 00:10:59.813 "data_offset": 2048, 00:10:59.813 "data_size": 63488 00:10:59.813 }, 00:10:59.813 { 00:10:59.813 "name": null, 00:10:59.813 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:59.813 "is_configured": false, 00:10:59.813 "data_offset": 0, 00:10:59.813 "data_size": 63488 00:10:59.813 }, 00:10:59.813 { 00:10:59.813 "name": null, 00:10:59.813 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:59.813 "is_configured": false, 00:10:59.813 "data_offset": 2048, 00:10:59.813 "data_size": 63488 00:10:59.813 }, 00:10:59.813 { 00:10:59.813 "name": null, 00:10:59.813 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:59.813 "is_configured": false, 00:10:59.813 "data_offset": 2048, 00:10:59.813 "data_size": 63488 00:10:59.813 } 00:10:59.813 ] 00:10:59.813 }' 00:10:59.813 16:07:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:59.813 16:07:26 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:00.382 16:07:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:00.382 16:07:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:00.382 16:07:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:00.382 16:07:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.382 16:07:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.382 [2024-12-12 16:07:26.451804] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:00.382 [2024-12-12 16:07:26.452001] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:00.382 [2024-12-12 16:07:26.452048] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:11:00.382 [2024-12-12 16:07:26.452082] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:00.382 [2024-12-12 16:07:26.452681] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:00.382 [2024-12-12 16:07:26.452746] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:00.382 [2024-12-12 16:07:26.452883] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:00.382 [2024-12-12 16:07:26.452951] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:00.382 pt2 00:11:00.382 16:07:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.382 16:07:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:00.382 16:07:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:00.382 16:07:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:00.382 16:07:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.382 16:07:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.382 [2024-12-12 16:07:26.463730] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:00.382 [2024-12-12 16:07:26.463833] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:00.382 [2024-12-12 16:07:26.463871] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:11:00.382 [2024-12-12 16:07:26.463916] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:00.382 [2024-12-12 16:07:26.464406] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:00.382 [2024-12-12 16:07:26.464469] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:00.382 [2024-12-12 16:07:26.464579] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:00.382 [2024-12-12 16:07:26.464641] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:00.382 pt3 00:11:00.382 16:07:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.382 16:07:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:00.382 16:07:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:00.382 16:07:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:00.382 16:07:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.382 16:07:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.382 [2024-12-12 16:07:26.475687] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:00.382 [2024-12-12 16:07:26.475738] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:00.382 [2024-12-12 16:07:26.475758] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:11:00.382 [2024-12-12 16:07:26.475767] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:00.382 [2024-12-12 16:07:26.476224] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:00.382 [2024-12-12 16:07:26.476248] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:00.382 [2024-12-12 16:07:26.476327] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:00.382 [2024-12-12 16:07:26.476352] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:00.382 [2024-12-12 16:07:26.476506] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:00.382 [2024-12-12 16:07:26.476515] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:00.382 [2024-12-12 16:07:26.476780] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:00.382 [2024-12-12 16:07:26.476977] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:00.382 [2024-12-12 16:07:26.477004] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:00.382 [2024-12-12 16:07:26.477147] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:00.382 pt4 00:11:00.382 16:07:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.382 16:07:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:00.383 16:07:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:11:00.383 16:07:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:00.383 16:07:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:00.383 16:07:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:00.383 16:07:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:00.383 16:07:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:00.383 16:07:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:00.383 16:07:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:00.383 16:07:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:00.383 16:07:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:00.383 16:07:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:00.383 16:07:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.383 16:07:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:00.383 16:07:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.383 16:07:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.383 16:07:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.383 16:07:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:00.383 "name": "raid_bdev1", 00:11:00.383 "uuid": "564c8f1d-1c26-4a17-b8e9-f871acb789c5", 00:11:00.383 "strip_size_kb": 64, 00:11:00.383 "state": "online", 00:11:00.383 "raid_level": "raid0", 00:11:00.383 
"superblock": true, 00:11:00.383 "num_base_bdevs": 4, 00:11:00.383 "num_base_bdevs_discovered": 4, 00:11:00.383 "num_base_bdevs_operational": 4, 00:11:00.383 "base_bdevs_list": [ 00:11:00.383 { 00:11:00.383 "name": "pt1", 00:11:00.383 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:00.383 "is_configured": true, 00:11:00.383 "data_offset": 2048, 00:11:00.383 "data_size": 63488 00:11:00.383 }, 00:11:00.383 { 00:11:00.383 "name": "pt2", 00:11:00.383 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:00.383 "is_configured": true, 00:11:00.383 "data_offset": 2048, 00:11:00.383 "data_size": 63488 00:11:00.383 }, 00:11:00.383 { 00:11:00.383 "name": "pt3", 00:11:00.383 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:00.383 "is_configured": true, 00:11:00.383 "data_offset": 2048, 00:11:00.383 "data_size": 63488 00:11:00.383 }, 00:11:00.383 { 00:11:00.383 "name": "pt4", 00:11:00.383 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:00.383 "is_configured": true, 00:11:00.383 "data_offset": 2048, 00:11:00.383 "data_size": 63488 00:11:00.383 } 00:11:00.383 ] 00:11:00.383 }' 00:11:00.383 16:07:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:00.383 16:07:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.643 16:07:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:00.643 16:07:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:00.643 16:07:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:00.643 16:07:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:00.643 16:07:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:00.643 16:07:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:00.643 16:07:26 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:00.643 16:07:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:00.643 16:07:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.643 16:07:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.643 [2024-12-12 16:07:26.915451] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:00.643 16:07:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.643 16:07:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:00.643 "name": "raid_bdev1", 00:11:00.643 "aliases": [ 00:11:00.643 "564c8f1d-1c26-4a17-b8e9-f871acb789c5" 00:11:00.643 ], 00:11:00.643 "product_name": "Raid Volume", 00:11:00.643 "block_size": 512, 00:11:00.643 "num_blocks": 253952, 00:11:00.643 "uuid": "564c8f1d-1c26-4a17-b8e9-f871acb789c5", 00:11:00.643 "assigned_rate_limits": { 00:11:00.643 "rw_ios_per_sec": 0, 00:11:00.643 "rw_mbytes_per_sec": 0, 00:11:00.643 "r_mbytes_per_sec": 0, 00:11:00.643 "w_mbytes_per_sec": 0 00:11:00.643 }, 00:11:00.643 "claimed": false, 00:11:00.643 "zoned": false, 00:11:00.643 "supported_io_types": { 00:11:00.643 "read": true, 00:11:00.643 "write": true, 00:11:00.643 "unmap": true, 00:11:00.643 "flush": true, 00:11:00.643 "reset": true, 00:11:00.643 "nvme_admin": false, 00:11:00.643 "nvme_io": false, 00:11:00.643 "nvme_io_md": false, 00:11:00.643 "write_zeroes": true, 00:11:00.643 "zcopy": false, 00:11:00.643 "get_zone_info": false, 00:11:00.643 "zone_management": false, 00:11:00.643 "zone_append": false, 00:11:00.643 "compare": false, 00:11:00.643 "compare_and_write": false, 00:11:00.643 "abort": false, 00:11:00.643 "seek_hole": false, 00:11:00.643 "seek_data": false, 00:11:00.643 "copy": false, 00:11:00.643 "nvme_iov_md": false 00:11:00.643 }, 00:11:00.643 
"memory_domains": [ 00:11:00.643 { 00:11:00.643 "dma_device_id": "system", 00:11:00.643 "dma_device_type": 1 00:11:00.643 }, 00:11:00.643 { 00:11:00.643 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:00.643 "dma_device_type": 2 00:11:00.643 }, 00:11:00.643 { 00:11:00.643 "dma_device_id": "system", 00:11:00.643 "dma_device_type": 1 00:11:00.643 }, 00:11:00.643 { 00:11:00.643 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:00.643 "dma_device_type": 2 00:11:00.643 }, 00:11:00.643 { 00:11:00.643 "dma_device_id": "system", 00:11:00.643 "dma_device_type": 1 00:11:00.643 }, 00:11:00.643 { 00:11:00.643 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:00.643 "dma_device_type": 2 00:11:00.643 }, 00:11:00.643 { 00:11:00.643 "dma_device_id": "system", 00:11:00.643 "dma_device_type": 1 00:11:00.643 }, 00:11:00.643 { 00:11:00.643 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:00.643 "dma_device_type": 2 00:11:00.643 } 00:11:00.643 ], 00:11:00.643 "driver_specific": { 00:11:00.643 "raid": { 00:11:00.643 "uuid": "564c8f1d-1c26-4a17-b8e9-f871acb789c5", 00:11:00.643 "strip_size_kb": 64, 00:11:00.643 "state": "online", 00:11:00.643 "raid_level": "raid0", 00:11:00.643 "superblock": true, 00:11:00.643 "num_base_bdevs": 4, 00:11:00.643 "num_base_bdevs_discovered": 4, 00:11:00.643 "num_base_bdevs_operational": 4, 00:11:00.643 "base_bdevs_list": [ 00:11:00.643 { 00:11:00.643 "name": "pt1", 00:11:00.643 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:00.643 "is_configured": true, 00:11:00.643 "data_offset": 2048, 00:11:00.643 "data_size": 63488 00:11:00.643 }, 00:11:00.643 { 00:11:00.643 "name": "pt2", 00:11:00.643 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:00.643 "is_configured": true, 00:11:00.643 "data_offset": 2048, 00:11:00.643 "data_size": 63488 00:11:00.643 }, 00:11:00.643 { 00:11:00.643 "name": "pt3", 00:11:00.643 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:00.643 "is_configured": true, 00:11:00.643 "data_offset": 2048, 00:11:00.643 "data_size": 63488 
00:11:00.643 }, 00:11:00.643 { 00:11:00.643 "name": "pt4", 00:11:00.643 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:00.643 "is_configured": true, 00:11:00.643 "data_offset": 2048, 00:11:00.643 "data_size": 63488 00:11:00.643 } 00:11:00.643 ] 00:11:00.643 } 00:11:00.643 } 00:11:00.643 }' 00:11:00.643 16:07:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:00.902 16:07:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:00.902 pt2 00:11:00.902 pt3 00:11:00.902 pt4' 00:11:00.902 16:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:00.902 16:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:00.902 16:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:00.902 16:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:00.902 16:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:00.902 16:07:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.902 16:07:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.902 16:07:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.902 16:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:00.902 16:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:00.902 16:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:00.902 16:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:11:00.902 16:07:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.902 16:07:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.902 16:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:00.902 16:07:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.902 16:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:00.902 16:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:00.902 16:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:00.902 16:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:00.902 16:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:00.903 16:07:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.903 16:07:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.903 16:07:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.903 16:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:00.903 16:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:00.903 16:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:00.903 16:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:00.903 16:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:11:00.903 16:07:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.903 16:07:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.903 16:07:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.903 16:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:00.903 16:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:00.903 16:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:00.903 16:07:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.903 16:07:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.903 16:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:00.903 [2024-12-12 16:07:27.222865] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:00.903 16:07:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.162 16:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 564c8f1d-1c26-4a17-b8e9-f871acb789c5 '!=' 564c8f1d-1c26-4a17-b8e9-f871acb789c5 ']' 00:11:01.162 16:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:11:01.162 16:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:01.162 16:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:01.162 16:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72749 00:11:01.162 16:07:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 72749 ']' 00:11:01.162 16:07:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 72749 00:11:01.162 16:07:27 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:11:01.162 16:07:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:01.162 16:07:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72749 00:11:01.162 16:07:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:01.162 16:07:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:01.162 16:07:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72749' 00:11:01.162 killing process with pid 72749 00:11:01.162 16:07:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 72749 00:11:01.162 [2024-12-12 16:07:27.310442] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:01.162 16:07:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 72749 00:11:01.162 [2024-12-12 16:07:27.310681] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:01.162 [2024-12-12 16:07:27.310778] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:01.162 [2024-12-12 16:07:27.310790] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:01.732 [2024-12-12 16:07:27.826512] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:03.109 16:07:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:03.109 00:11:03.109 real 0m6.065s 00:11:03.109 user 0m8.392s 00:11:03.109 sys 0m1.091s 00:11:03.109 16:07:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:03.109 16:07:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.109 ************************************ 00:11:03.109 END TEST raid_superblock_test 
00:11:03.109 ************************************ 00:11:03.109 16:07:29 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:11:03.109 16:07:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:03.109 16:07:29 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:03.109 16:07:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:03.109 ************************************ 00:11:03.109 START TEST raid_read_error_test 00:11:03.109 ************************************ 00:11:03.109 16:07:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 read 00:11:03.109 16:07:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:11:03.109 16:07:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:03.109 16:07:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:03.109 16:07:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:03.109 16:07:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:03.109 16:07:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:03.110 16:07:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:03.110 16:07:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:03.110 16:07:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:03.110 16:07:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:03.110 16:07:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:03.110 16:07:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:03.110 16:07:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( 
i++ )) 00:11:03.110 16:07:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:03.110 16:07:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:03.110 16:07:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:03.110 16:07:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:03.110 16:07:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:03.110 16:07:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:03.110 16:07:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:03.110 16:07:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:03.110 16:07:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:03.110 16:07:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:03.110 16:07:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:03.110 16:07:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:11:03.110 16:07:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:03.110 16:07:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:03.110 16:07:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:03.110 16:07:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.5rsDUWKoQv 00:11:03.110 16:07:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73021 00:11:03.110 16:07:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73021 00:11:03.110 16:07:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:03.110 16:07:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 73021 ']' 00:11:03.110 16:07:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:03.110 16:07:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:03.110 16:07:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:03.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:03.110 16:07:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:03.110 16:07:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.110 [2024-12-12 16:07:29.435212] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:11:03.110 [2024-12-12 16:07:29.435415] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73021 ] 00:11:03.369 [2024-12-12 16:07:29.595914] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:03.629 [2024-12-12 16:07:29.752524] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:03.888 [2024-12-12 16:07:30.038707] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:03.888 [2024-12-12 16:07:30.038760] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:04.148 16:07:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:04.148 16:07:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:04.148 16:07:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:04.148 16:07:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:04.148 16:07:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.148 16:07:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.148 BaseBdev1_malloc 00:11:04.148 16:07:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.148 16:07:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:04.148 16:07:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.148 16:07:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.148 true 00:11:04.148 16:07:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:04.148 16:07:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:04.148 16:07:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.148 16:07:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.148 [2024-12-12 16:07:30.391046] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:04.148 [2024-12-12 16:07:30.391125] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:04.148 [2024-12-12 16:07:30.391149] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:04.148 [2024-12-12 16:07:30.391163] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:04.148 [2024-12-12 16:07:30.393968] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:04.148 [2024-12-12 16:07:30.394112] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:04.148 BaseBdev1 00:11:04.148 16:07:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.148 16:07:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:04.148 16:07:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:04.148 16:07:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.148 16:07:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.148 BaseBdev2_malloc 00:11:04.148 16:07:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.148 16:07:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:04.148 16:07:30 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.148 16:07:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.148 true 00:11:04.148 16:07:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.148 16:07:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:04.148 16:07:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.148 16:07:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.148 [2024-12-12 16:07:30.473420] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:04.148 [2024-12-12 16:07:30.473497] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:04.148 [2024-12-12 16:07:30.473518] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:04.148 [2024-12-12 16:07:30.473532] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:04.148 [2024-12-12 16:07:30.476353] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:04.148 [2024-12-12 16:07:30.476480] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:04.148 BaseBdev2 00:11:04.148 16:07:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.148 16:07:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:04.148 16:07:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:04.148 16:07:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.148 16:07:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.408 BaseBdev3_malloc 00:11:04.408 16:07:30 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.408 16:07:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:04.408 16:07:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.408 16:07:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.408 true 00:11:04.408 16:07:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.408 16:07:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:04.408 16:07:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.408 16:07:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.408 [2024-12-12 16:07:30.559606] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:04.408 [2024-12-12 16:07:30.559671] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:04.408 [2024-12-12 16:07:30.559690] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:04.408 [2024-12-12 16:07:30.559703] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:04.408 [2024-12-12 16:07:30.562375] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:04.408 [2024-12-12 16:07:30.562416] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:04.408 BaseBdev3 00:11:04.408 16:07:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.408 16:07:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:04.408 16:07:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:11:04.408 16:07:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.408 16:07:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.408 BaseBdev4_malloc 00:11:04.408 16:07:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.408 16:07:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:04.408 16:07:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.408 16:07:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.408 true 00:11:04.408 16:07:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.408 16:07:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:04.408 16:07:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.408 16:07:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.408 [2024-12-12 16:07:30.640176] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:04.408 [2024-12-12 16:07:30.640319] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:04.408 [2024-12-12 16:07:30.640343] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:04.408 [2024-12-12 16:07:30.640356] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:04.408 [2024-12-12 16:07:30.643029] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:04.408 [2024-12-12 16:07:30.643071] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:04.408 BaseBdev4 00:11:04.408 16:07:30 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.408 16:07:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:04.408 16:07:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.408 16:07:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.408 [2024-12-12 16:07:30.652229] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:04.408 [2024-12-12 16:07:30.654656] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:04.408 [2024-12-12 16:07:30.654746] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:04.408 [2024-12-12 16:07:30.654819] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:04.408 [2024-12-12 16:07:30.655104] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:04.408 [2024-12-12 16:07:30.655125] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:04.408 [2024-12-12 16:07:30.655406] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:04.408 [2024-12-12 16:07:30.655607] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:04.408 [2024-12-12 16:07:30.655621] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:04.408 [2024-12-12 16:07:30.655813] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:04.408 16:07:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.408 16:07:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:04.408 16:07:30 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:04.408 16:07:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:04.408 16:07:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:04.408 16:07:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:04.408 16:07:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:04.408 16:07:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:04.408 16:07:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:04.408 16:07:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:04.408 16:07:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:04.408 16:07:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.408 16:07:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:04.408 16:07:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.408 16:07:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.408 16:07:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.408 16:07:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:04.408 "name": "raid_bdev1", 00:11:04.408 "uuid": "58de4780-4f2e-42d4-9659-5a2307e12d08", 00:11:04.408 "strip_size_kb": 64, 00:11:04.408 "state": "online", 00:11:04.408 "raid_level": "raid0", 00:11:04.408 "superblock": true, 00:11:04.408 "num_base_bdevs": 4, 00:11:04.408 "num_base_bdevs_discovered": 4, 00:11:04.408 "num_base_bdevs_operational": 4, 00:11:04.408 "base_bdevs_list": [ 00:11:04.408 
{ 00:11:04.408 "name": "BaseBdev1", 00:11:04.408 "uuid": "e0c658a1-b1a4-54be-a6c0-f8b617d1b83b", 00:11:04.408 "is_configured": true, 00:11:04.408 "data_offset": 2048, 00:11:04.408 "data_size": 63488 00:11:04.408 }, 00:11:04.408 { 00:11:04.408 "name": "BaseBdev2", 00:11:04.408 "uuid": "e6200ac3-d1b2-553b-8ca9-d38dd2c1153d", 00:11:04.408 "is_configured": true, 00:11:04.408 "data_offset": 2048, 00:11:04.408 "data_size": 63488 00:11:04.408 }, 00:11:04.408 { 00:11:04.408 "name": "BaseBdev3", 00:11:04.408 "uuid": "0137c4cb-b91d-5d52-a5c4-5e34c7527136", 00:11:04.408 "is_configured": true, 00:11:04.408 "data_offset": 2048, 00:11:04.408 "data_size": 63488 00:11:04.408 }, 00:11:04.408 { 00:11:04.408 "name": "BaseBdev4", 00:11:04.408 "uuid": "89e90373-4b89-5fd9-9be2-9f0e97722154", 00:11:04.408 "is_configured": true, 00:11:04.408 "data_offset": 2048, 00:11:04.408 "data_size": 63488 00:11:04.408 } 00:11:04.408 ] 00:11:04.408 }' 00:11:04.408 16:07:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:04.408 16:07:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.976 16:07:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:04.976 16:07:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:04.976 [2024-12-12 16:07:31.224878] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:05.911 16:07:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:05.911 16:07:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.911 16:07:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.911 16:07:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.911 16:07:32 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:05.911 16:07:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:11:05.911 16:07:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:05.911 16:07:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:05.911 16:07:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:05.911 16:07:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:05.911 16:07:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:05.911 16:07:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:05.911 16:07:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:05.911 16:07:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:05.911 16:07:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:05.911 16:07:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:05.911 16:07:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:05.911 16:07:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.911 16:07:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:05.911 16:07:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.911 16:07:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.911 16:07:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.911 16:07:32 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:05.911 "name": "raid_bdev1", 00:11:05.911 "uuid": "58de4780-4f2e-42d4-9659-5a2307e12d08", 00:11:05.911 "strip_size_kb": 64, 00:11:05.911 "state": "online", 00:11:05.911 "raid_level": "raid0", 00:11:05.911 "superblock": true, 00:11:05.911 "num_base_bdevs": 4, 00:11:05.911 "num_base_bdevs_discovered": 4, 00:11:05.911 "num_base_bdevs_operational": 4, 00:11:05.911 "base_bdevs_list": [ 00:11:05.911 { 00:11:05.911 "name": "BaseBdev1", 00:11:05.911 "uuid": "e0c658a1-b1a4-54be-a6c0-f8b617d1b83b", 00:11:05.911 "is_configured": true, 00:11:05.911 "data_offset": 2048, 00:11:05.911 "data_size": 63488 00:11:05.911 }, 00:11:05.911 { 00:11:05.911 "name": "BaseBdev2", 00:11:05.911 "uuid": "e6200ac3-d1b2-553b-8ca9-d38dd2c1153d", 00:11:05.911 "is_configured": true, 00:11:05.911 "data_offset": 2048, 00:11:05.911 "data_size": 63488 00:11:05.911 }, 00:11:05.911 { 00:11:05.911 "name": "BaseBdev3", 00:11:05.911 "uuid": "0137c4cb-b91d-5d52-a5c4-5e34c7527136", 00:11:05.911 "is_configured": true, 00:11:05.911 "data_offset": 2048, 00:11:05.911 "data_size": 63488 00:11:05.911 }, 00:11:05.911 { 00:11:05.911 "name": "BaseBdev4", 00:11:05.911 "uuid": "89e90373-4b89-5fd9-9be2-9f0e97722154", 00:11:05.911 "is_configured": true, 00:11:05.911 "data_offset": 2048, 00:11:05.911 "data_size": 63488 00:11:05.912 } 00:11:05.912 ] 00:11:05.912 }' 00:11:05.912 16:07:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:05.912 16:07:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.480 16:07:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:06.480 16:07:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.480 16:07:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.480 [2024-12-12 16:07:32.591706] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:06.480 [2024-12-12 16:07:32.591765] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:06.480 [2024-12-12 16:07:32.595140] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:06.480 [2024-12-12 16:07:32.595251] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:06.480 [2024-12-12 16:07:32.595336] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:06.480 [2024-12-12 16:07:32.595390] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:06.480 { 00:11:06.480 "results": [ 00:11:06.480 { 00:11:06.480 "job": "raid_bdev1", 00:11:06.480 "core_mask": "0x1", 00:11:06.480 "workload": "randrw", 00:11:06.480 "percentage": 50, 00:11:06.480 "status": "finished", 00:11:06.480 "queue_depth": 1, 00:11:06.480 "io_size": 131072, 00:11:06.480 "runtime": 1.367102, 00:11:06.480 "iops": 11669.941233353473, 00:11:06.480 "mibps": 1458.742654169184, 00:11:06.480 "io_failed": 1, 00:11:06.480 "io_timeout": 0, 00:11:06.480 "avg_latency_us": 119.96612787876384, 00:11:06.480 "min_latency_us": 32.866375545851525, 00:11:06.480 "max_latency_us": 1681.3275109170306 00:11:06.480 } 00:11:06.480 ], 00:11:06.480 "core_count": 1 00:11:06.480 } 00:11:06.480 16:07:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.480 16:07:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73021 00:11:06.480 16:07:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 73021 ']' 00:11:06.480 16:07:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 73021 00:11:06.480 16:07:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:11:06.480 16:07:32 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:06.480 16:07:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73021 00:11:06.480 16:07:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:06.480 16:07:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:06.480 16:07:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73021' 00:11:06.480 killing process with pid 73021 00:11:06.480 16:07:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 73021 00:11:06.480 [2024-12-12 16:07:32.638695] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:06.480 16:07:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 73021 00:11:06.740 [2024-12-12 16:07:33.060686] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:08.663 16:07:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.5rsDUWKoQv 00:11:08.663 16:07:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:08.663 16:07:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:08.663 16:07:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:11:08.663 16:07:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:11:08.663 16:07:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:08.663 16:07:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:08.663 ************************************ 00:11:08.663 END TEST raid_read_error_test 00:11:08.663 ************************************ 00:11:08.663 16:07:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:11:08.663 00:11:08.663 real 0m5.193s 
00:11:08.663 user 0m5.959s 00:11:08.663 sys 0m0.721s 00:11:08.663 16:07:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:08.663 16:07:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.663 16:07:34 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:11:08.663 16:07:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:08.663 16:07:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:08.663 16:07:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:08.663 ************************************ 00:11:08.663 START TEST raid_write_error_test 00:11:08.663 ************************************ 00:11:08.663 16:07:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 write 00:11:08.663 16:07:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:11:08.663 16:07:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:08.663 16:07:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:08.663 16:07:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:08.663 16:07:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:08.663 16:07:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:08.663 16:07:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:08.663 16:07:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:08.663 16:07:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:08.663 16:07:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:08.663 16:07:34 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:08.663 16:07:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:08.663 16:07:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:08.663 16:07:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:08.663 16:07:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:08.663 16:07:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:08.663 16:07:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:08.663 16:07:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:08.663 16:07:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:08.663 16:07:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:08.663 16:07:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:08.663 16:07:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:08.663 16:07:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:08.663 16:07:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:08.663 16:07:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:11:08.663 16:07:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:08.663 16:07:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:08.663 16:07:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:08.663 16:07:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.7GWiHZ7RS7 00:11:08.663 16:07:34 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73179 00:11:08.663 16:07:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:08.663 16:07:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73179 00:11:08.663 16:07:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 73179 ']' 00:11:08.663 16:07:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:08.663 16:07:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:08.663 16:07:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:08.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:08.663 16:07:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:08.663 16:07:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.663 [2024-12-12 16:07:34.694308] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:11:08.663 [2024-12-12 16:07:34.694507] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73179 ] 00:11:08.663 [2024-12-12 16:07:34.870009] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:08.922 [2024-12-12 16:07:35.035163] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:08.922 [2024-12-12 16:07:35.234147] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:08.922 [2024-12-12 16:07:35.234215] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:09.180 16:07:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:09.180 16:07:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:09.180 16:07:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:09.180 16:07:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:09.180 16:07:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.180 16:07:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.440 BaseBdev1_malloc 00:11:09.440 16:07:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.440 16:07:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:09.440 16:07:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.440 16:07:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.440 true 00:11:09.440 16:07:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:09.440 16:07:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:09.440 16:07:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.440 16:07:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.440 [2024-12-12 16:07:35.585923] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:09.440 [2024-12-12 16:07:35.585979] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:09.440 [2024-12-12 16:07:35.586000] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:09.440 [2024-12-12 16:07:35.586012] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:09.440 [2024-12-12 16:07:35.588296] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:09.440 [2024-12-12 16:07:35.588338] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:09.440 BaseBdev1 00:11:09.440 16:07:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.440 16:07:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:09.440 16:07:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:09.440 16:07:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.440 16:07:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.440 BaseBdev2_malloc 00:11:09.440 16:07:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.440 16:07:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:09.440 16:07:35 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.440 16:07:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.440 true 00:11:09.440 16:07:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.440 16:07:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:09.440 16:07:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.440 16:07:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.440 [2024-12-12 16:07:35.652305] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:09.440 [2024-12-12 16:07:35.652361] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:09.440 [2024-12-12 16:07:35.652377] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:09.440 [2024-12-12 16:07:35.652387] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:09.440 [2024-12-12 16:07:35.654459] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:09.440 [2024-12-12 16:07:35.654499] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:09.440 BaseBdev2 00:11:09.440 16:07:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.440 16:07:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:09.440 16:07:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:09.440 16:07:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.440 16:07:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:11:09.440 BaseBdev3_malloc 00:11:09.440 16:07:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.440 16:07:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:09.440 16:07:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.440 16:07:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.440 true 00:11:09.440 16:07:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.440 16:07:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:09.440 16:07:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.440 16:07:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.440 [2024-12-12 16:07:35.733303] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:09.440 [2024-12-12 16:07:35.733437] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:09.440 [2024-12-12 16:07:35.733462] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:09.440 [2024-12-12 16:07:35.733472] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:09.440 [2024-12-12 16:07:35.735663] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:09.440 [2024-12-12 16:07:35.735702] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:09.440 BaseBdev3 00:11:09.440 16:07:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.440 16:07:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:09.440 16:07:35 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:09.440 16:07:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.440 16:07:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.440 BaseBdev4_malloc 00:11:09.440 16:07:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.440 16:07:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:09.440 16:07:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.440 16:07:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.700 true 00:11:09.700 16:07:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.700 16:07:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:09.700 16:07:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.700 16:07:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.700 [2024-12-12 16:07:35.800245] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:09.700 [2024-12-12 16:07:35.800301] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:09.700 [2024-12-12 16:07:35.800319] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:09.700 [2024-12-12 16:07:35.800329] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:09.700 [2024-12-12 16:07:35.802411] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:09.700 [2024-12-12 16:07:35.802453] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:09.700 BaseBdev4 
00:11:09.700 16:07:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.700 16:07:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:09.700 16:07:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.700 16:07:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.700 [2024-12-12 16:07:35.812313] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:09.700 [2024-12-12 16:07:35.814218] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:09.700 [2024-12-12 16:07:35.814292] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:09.700 [2024-12-12 16:07:35.814355] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:09.700 [2024-12-12 16:07:35.814574] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:09.700 [2024-12-12 16:07:35.814591] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:09.700 [2024-12-12 16:07:35.814844] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:09.700 [2024-12-12 16:07:35.815020] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:09.700 [2024-12-12 16:07:35.815032] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:09.700 [2024-12-12 16:07:35.815198] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:09.700 16:07:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.700 16:07:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:11:09.700 16:07:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:09.700 16:07:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:09.700 16:07:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:09.700 16:07:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:09.700 16:07:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:09.700 16:07:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:09.700 16:07:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:09.700 16:07:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:09.700 16:07:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:09.700 16:07:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.700 16:07:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:09.700 16:07:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.700 16:07:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.700 16:07:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.700 16:07:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:09.700 "name": "raid_bdev1", 00:11:09.700 "uuid": "da10beee-c540-4a15-97c2-91de0f6d835d", 00:11:09.700 "strip_size_kb": 64, 00:11:09.700 "state": "online", 00:11:09.700 "raid_level": "raid0", 00:11:09.700 "superblock": true, 00:11:09.700 "num_base_bdevs": 4, 00:11:09.700 "num_base_bdevs_discovered": 4, 00:11:09.700 
"num_base_bdevs_operational": 4, 00:11:09.700 "base_bdevs_list": [ 00:11:09.700 { 00:11:09.700 "name": "BaseBdev1", 00:11:09.701 "uuid": "9f2bace4-7cac-5be5-b6c2-ea5c35c0d117", 00:11:09.701 "is_configured": true, 00:11:09.701 "data_offset": 2048, 00:11:09.701 "data_size": 63488 00:11:09.701 }, 00:11:09.701 { 00:11:09.701 "name": "BaseBdev2", 00:11:09.701 "uuid": "a1f537ac-e4a7-55f0-819d-b068cb8a3a25", 00:11:09.701 "is_configured": true, 00:11:09.701 "data_offset": 2048, 00:11:09.701 "data_size": 63488 00:11:09.701 }, 00:11:09.701 { 00:11:09.701 "name": "BaseBdev3", 00:11:09.701 "uuid": "afb9ee77-891a-5b6f-bc5f-acc20b61c94f", 00:11:09.701 "is_configured": true, 00:11:09.701 "data_offset": 2048, 00:11:09.701 "data_size": 63488 00:11:09.701 }, 00:11:09.701 { 00:11:09.701 "name": "BaseBdev4", 00:11:09.701 "uuid": "b867c430-02cc-5833-a383-01bd0da13acf", 00:11:09.701 "is_configured": true, 00:11:09.701 "data_offset": 2048, 00:11:09.701 "data_size": 63488 00:11:09.701 } 00:11:09.701 ] 00:11:09.701 }' 00:11:09.701 16:07:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:09.701 16:07:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.959 16:07:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:09.959 16:07:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:10.219 [2024-12-12 16:07:36.352939] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:11.156 16:07:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:11.156 16:07:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.156 16:07:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.156 16:07:37 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.156 16:07:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:11.156 16:07:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:11:11.156 16:07:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:11.156 16:07:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:11.156 16:07:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:11.156 16:07:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:11.156 16:07:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:11.156 16:07:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:11.156 16:07:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:11.156 16:07:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:11.156 16:07:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:11.156 16:07:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:11.157 16:07:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:11.157 16:07:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.157 16:07:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:11.157 16:07:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.157 16:07:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.157 16:07:37 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.157 16:07:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:11.157 "name": "raid_bdev1", 00:11:11.157 "uuid": "da10beee-c540-4a15-97c2-91de0f6d835d", 00:11:11.157 "strip_size_kb": 64, 00:11:11.157 "state": "online", 00:11:11.157 "raid_level": "raid0", 00:11:11.157 "superblock": true, 00:11:11.157 "num_base_bdevs": 4, 00:11:11.157 "num_base_bdevs_discovered": 4, 00:11:11.157 "num_base_bdevs_operational": 4, 00:11:11.157 "base_bdevs_list": [ 00:11:11.157 { 00:11:11.157 "name": "BaseBdev1", 00:11:11.157 "uuid": "9f2bace4-7cac-5be5-b6c2-ea5c35c0d117", 00:11:11.157 "is_configured": true, 00:11:11.157 "data_offset": 2048, 00:11:11.157 "data_size": 63488 00:11:11.157 }, 00:11:11.157 { 00:11:11.157 "name": "BaseBdev2", 00:11:11.157 "uuid": "a1f537ac-e4a7-55f0-819d-b068cb8a3a25", 00:11:11.157 "is_configured": true, 00:11:11.157 "data_offset": 2048, 00:11:11.157 "data_size": 63488 00:11:11.157 }, 00:11:11.157 { 00:11:11.157 "name": "BaseBdev3", 00:11:11.157 "uuid": "afb9ee77-891a-5b6f-bc5f-acc20b61c94f", 00:11:11.157 "is_configured": true, 00:11:11.157 "data_offset": 2048, 00:11:11.157 "data_size": 63488 00:11:11.157 }, 00:11:11.157 { 00:11:11.157 "name": "BaseBdev4", 00:11:11.157 "uuid": "b867c430-02cc-5833-a383-01bd0da13acf", 00:11:11.157 "is_configured": true, 00:11:11.157 "data_offset": 2048, 00:11:11.157 "data_size": 63488 00:11:11.157 } 00:11:11.157 ] 00:11:11.157 }' 00:11:11.157 16:07:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:11.157 16:07:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.416 16:07:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:11.416 16:07:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.416 16:07:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:11:11.416 [2024-12-12 16:07:37.680942] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:11.416 [2024-12-12 16:07:37.680979] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:11.416 [2024-12-12 16:07:37.683527] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:11.416 [2024-12-12 16:07:37.683595] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:11.416 [2024-12-12 16:07:37.683640] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:11.416 [2024-12-12 16:07:37.683653] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:11.416 { 00:11:11.416 "results": [ 00:11:11.416 { 00:11:11.416 "job": "raid_bdev1", 00:11:11.416 "core_mask": "0x1", 00:11:11.416 "workload": "randrw", 00:11:11.417 "percentage": 50, 00:11:11.417 "status": "finished", 00:11:11.417 "queue_depth": 1, 00:11:11.417 "io_size": 131072, 00:11:11.417 "runtime": 1.328793, 00:11:11.417 "iops": 14689.27063884292, 00:11:11.417 "mibps": 1836.158829855365, 00:11:11.417 "io_failed": 1, 00:11:11.417 "io_timeout": 0, 00:11:11.417 "avg_latency_us": 94.5495482854893, 00:11:11.417 "min_latency_us": 26.606113537117903, 00:11:11.417 "max_latency_us": 1445.2262008733624 00:11:11.417 } 00:11:11.417 ], 00:11:11.417 "core_count": 1 00:11:11.417 } 00:11:11.417 16:07:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.417 16:07:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73179 00:11:11.417 16:07:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 73179 ']' 00:11:11.417 16:07:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 73179 00:11:11.417 16:07:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 
00:11:11.417 16:07:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:11.417 16:07:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73179 00:11:11.417 killing process with pid 73179 00:11:11.417 16:07:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:11.417 16:07:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:11.417 16:07:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73179' 00:11:11.417 16:07:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 73179 00:11:11.417 16:07:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 73179 00:11:11.417 [2024-12-12 16:07:37.729398] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:11.983 [2024-12-12 16:07:38.055728] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:13.363 16:07:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.7GWiHZ7RS7 00:11:13.363 16:07:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:13.363 16:07:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:13.363 16:07:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:11:13.363 16:07:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:11:13.363 16:07:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:13.363 16:07:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:13.363 16:07:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:11:13.363 00:11:13.363 real 0m4.721s 00:11:13.364 user 0m5.529s 00:11:13.364 sys 0m0.578s 00:11:13.364 16:07:39 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:13.364 16:07:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.364 ************************************ 00:11:13.364 END TEST raid_write_error_test 00:11:13.364 ************************************ 00:11:13.364 16:07:39 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:13.364 16:07:39 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:11:13.364 16:07:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:13.364 16:07:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:13.364 16:07:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:13.364 ************************************ 00:11:13.364 START TEST raid_state_function_test 00:11:13.364 ************************************ 00:11:13.364 16:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 false 00:11:13.364 16:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:11:13.364 16:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:13.364 16:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:13.364 16:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:13.364 16:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:13.364 16:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:13.364 16:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:13.364 16:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:13.364 16:07:39 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:13.364 16:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:13.364 16:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:13.364 16:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:13.364 16:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:13.364 16:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:13.364 16:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:13.364 16:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:13.364 16:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:13.364 16:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:13.364 16:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:13.364 16:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:13.364 16:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:13.364 16:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:13.364 16:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:13.364 16:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:13.364 16:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:11:13.364 16:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:13.364 16:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # 
strip_size_create_arg='-z 64' 00:11:13.364 16:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:13.364 16:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:13.364 16:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:13.364 16:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=73317 00:11:13.364 Process raid pid: 73317 00:11:13.364 16:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73317' 00:11:13.364 16:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 73317 00:11:13.364 16:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 73317 ']' 00:11:13.364 16:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:13.364 16:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:13.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:13.364 16:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:13.364 16:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:13.364 16:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.364 [2024-12-12 16:07:39.459876] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:11:13.364 [2024-12-12 16:07:39.460026] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:13.364 [2024-12-12 16:07:39.642554] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:13.622 [2024-12-12 16:07:39.779612] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:13.881 [2024-12-12 16:07:40.021600] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:13.881 [2024-12-12 16:07:40.021644] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:14.140 16:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:14.140 16:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:11:14.140 16:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:14.140 16:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.140 16:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.140 [2024-12-12 16:07:40.399606] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:14.140 [2024-12-12 16:07:40.399684] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:14.140 [2024-12-12 16:07:40.399697] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:14.140 [2024-12-12 16:07:40.399708] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:14.140 [2024-12-12 16:07:40.399717] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:11:14.140 [2024-12-12 16:07:40.399727] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:14.140 [2024-12-12 16:07:40.399735] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:14.140 [2024-12-12 16:07:40.399745] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:14.140 16:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.140 16:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:14.140 16:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:14.140 16:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:14.140 16:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:14.140 16:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:14.140 16:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:14.140 16:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:14.140 16:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:14.140 16:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:14.140 16:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:14.140 16:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.140 16:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.140 16:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] 
| select(.name == "Existed_Raid")' 00:11:14.140 16:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.140 16:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.140 16:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:14.140 "name": "Existed_Raid", 00:11:14.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.140 "strip_size_kb": 64, 00:11:14.140 "state": "configuring", 00:11:14.140 "raid_level": "concat", 00:11:14.140 "superblock": false, 00:11:14.140 "num_base_bdevs": 4, 00:11:14.140 "num_base_bdevs_discovered": 0, 00:11:14.140 "num_base_bdevs_operational": 4, 00:11:14.140 "base_bdevs_list": [ 00:11:14.140 { 00:11:14.140 "name": "BaseBdev1", 00:11:14.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.140 "is_configured": false, 00:11:14.140 "data_offset": 0, 00:11:14.140 "data_size": 0 00:11:14.140 }, 00:11:14.140 { 00:11:14.140 "name": "BaseBdev2", 00:11:14.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.140 "is_configured": false, 00:11:14.140 "data_offset": 0, 00:11:14.140 "data_size": 0 00:11:14.140 }, 00:11:14.140 { 00:11:14.140 "name": "BaseBdev3", 00:11:14.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.140 "is_configured": false, 00:11:14.140 "data_offset": 0, 00:11:14.140 "data_size": 0 00:11:14.140 }, 00:11:14.140 { 00:11:14.140 "name": "BaseBdev4", 00:11:14.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.140 "is_configured": false, 00:11:14.140 "data_offset": 0, 00:11:14.140 "data_size": 0 00:11:14.140 } 00:11:14.140 ] 00:11:14.140 }' 00:11:14.140 16:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:14.140 16:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.708 16:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
00:11:14.708 16:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.708 16:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.708 [2024-12-12 16:07:40.786906] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:14.708 [2024-12-12 16:07:40.786955] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:14.708 16:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.708 16:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:14.708 16:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.708 16:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.708 [2024-12-12 16:07:40.798870] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:14.708 [2024-12-12 16:07:40.798935] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:14.708 [2024-12-12 16:07:40.798946] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:14.708 [2024-12-12 16:07:40.798957] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:14.708 [2024-12-12 16:07:40.798965] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:14.708 [2024-12-12 16:07:40.798976] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:14.708 [2024-12-12 16:07:40.798984] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:14.708 [2024-12-12 16:07:40.798995] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:14.708 16:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.708 16:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:14.708 16:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.708 16:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.708 [2024-12-12 16:07:40.854504] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:14.708 BaseBdev1 00:11:14.708 16:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.708 16:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:14.708 16:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:14.708 16:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:14.708 16:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:14.708 16:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:14.708 16:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:14.708 16:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:14.708 16:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.708 16:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.708 16:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.708 16:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:14.708 16:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.708 16:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.708 [ 00:11:14.708 { 00:11:14.708 "name": "BaseBdev1", 00:11:14.708 "aliases": [ 00:11:14.708 "66ad3272-bab0-492c-9609-846b7b232801" 00:11:14.708 ], 00:11:14.708 "product_name": "Malloc disk", 00:11:14.708 "block_size": 512, 00:11:14.708 "num_blocks": 65536, 00:11:14.708 "uuid": "66ad3272-bab0-492c-9609-846b7b232801", 00:11:14.708 "assigned_rate_limits": { 00:11:14.708 "rw_ios_per_sec": 0, 00:11:14.708 "rw_mbytes_per_sec": 0, 00:11:14.708 "r_mbytes_per_sec": 0, 00:11:14.708 "w_mbytes_per_sec": 0 00:11:14.708 }, 00:11:14.708 "claimed": true, 00:11:14.708 "claim_type": "exclusive_write", 00:11:14.708 "zoned": false, 00:11:14.708 "supported_io_types": { 00:11:14.708 "read": true, 00:11:14.708 "write": true, 00:11:14.708 "unmap": true, 00:11:14.709 "flush": true, 00:11:14.709 "reset": true, 00:11:14.709 "nvme_admin": false, 00:11:14.709 "nvme_io": false, 00:11:14.709 "nvme_io_md": false, 00:11:14.709 "write_zeroes": true, 00:11:14.709 "zcopy": true, 00:11:14.709 "get_zone_info": false, 00:11:14.709 "zone_management": false, 00:11:14.709 "zone_append": false, 00:11:14.709 "compare": false, 00:11:14.709 "compare_and_write": false, 00:11:14.709 "abort": true, 00:11:14.709 "seek_hole": false, 00:11:14.709 "seek_data": false, 00:11:14.709 "copy": true, 00:11:14.709 "nvme_iov_md": false 00:11:14.709 }, 00:11:14.709 "memory_domains": [ 00:11:14.709 { 00:11:14.709 "dma_device_id": "system", 00:11:14.709 "dma_device_type": 1 00:11:14.709 }, 00:11:14.709 { 00:11:14.709 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:14.709 "dma_device_type": 2 00:11:14.709 } 00:11:14.709 ], 00:11:14.709 "driver_specific": {} 00:11:14.709 } 00:11:14.709 ] 00:11:14.709 16:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:11:14.709 16:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:14.709 16:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:14.709 16:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:14.709 16:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:14.709 16:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:14.709 16:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:14.709 16:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:14.709 16:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:14.709 16:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:14.709 16:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:14.709 16:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:14.709 16:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.709 16:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.709 16:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.709 16:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:14.709 16:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.709 16:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:14.709 "name": "Existed_Raid", 
00:11:14.709 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.709 "strip_size_kb": 64, 00:11:14.709 "state": "configuring", 00:11:14.709 "raid_level": "concat", 00:11:14.709 "superblock": false, 00:11:14.709 "num_base_bdevs": 4, 00:11:14.709 "num_base_bdevs_discovered": 1, 00:11:14.709 "num_base_bdevs_operational": 4, 00:11:14.709 "base_bdevs_list": [ 00:11:14.709 { 00:11:14.709 "name": "BaseBdev1", 00:11:14.709 "uuid": "66ad3272-bab0-492c-9609-846b7b232801", 00:11:14.709 "is_configured": true, 00:11:14.709 "data_offset": 0, 00:11:14.709 "data_size": 65536 00:11:14.709 }, 00:11:14.709 { 00:11:14.709 "name": "BaseBdev2", 00:11:14.709 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.709 "is_configured": false, 00:11:14.709 "data_offset": 0, 00:11:14.709 "data_size": 0 00:11:14.709 }, 00:11:14.709 { 00:11:14.709 "name": "BaseBdev3", 00:11:14.709 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.709 "is_configured": false, 00:11:14.709 "data_offset": 0, 00:11:14.709 "data_size": 0 00:11:14.709 }, 00:11:14.709 { 00:11:14.709 "name": "BaseBdev4", 00:11:14.709 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.709 "is_configured": false, 00:11:14.709 "data_offset": 0, 00:11:14.709 "data_size": 0 00:11:14.709 } 00:11:14.709 ] 00:11:14.709 }' 00:11:14.709 16:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:14.709 16:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.278 16:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:15.278 16:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.278 16:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.278 [2024-12-12 16:07:41.361731] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:15.278 [2024-12-12 16:07:41.361801] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:15.278 16:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.278 16:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:15.278 16:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.278 16:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.278 [2024-12-12 16:07:41.373768] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:15.278 [2024-12-12 16:07:41.375921] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:15.278 [2024-12-12 16:07:41.375970] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:15.278 [2024-12-12 16:07:41.375982] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:15.278 [2024-12-12 16:07:41.375994] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:15.278 [2024-12-12 16:07:41.376003] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:15.278 [2024-12-12 16:07:41.376013] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:15.278 16:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.278 16:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:15.278 16:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:15.278 16:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:11:15.278 16:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:15.278 16:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:15.278 16:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:15.278 16:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:15.278 16:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:15.278 16:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:15.278 16:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:15.278 16:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:15.278 16:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:15.278 16:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.278 16:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:15.278 16:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.278 16:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.278 16:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.278 16:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:15.278 "name": "Existed_Raid", 00:11:15.278 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.278 "strip_size_kb": 64, 00:11:15.278 "state": "configuring", 00:11:15.278 "raid_level": "concat", 00:11:15.278 "superblock": false, 00:11:15.278 "num_base_bdevs": 4, 00:11:15.278 
"num_base_bdevs_discovered": 1, 00:11:15.278 "num_base_bdevs_operational": 4, 00:11:15.278 "base_bdevs_list": [ 00:11:15.278 { 00:11:15.278 "name": "BaseBdev1", 00:11:15.278 "uuid": "66ad3272-bab0-492c-9609-846b7b232801", 00:11:15.278 "is_configured": true, 00:11:15.278 "data_offset": 0, 00:11:15.278 "data_size": 65536 00:11:15.278 }, 00:11:15.278 { 00:11:15.278 "name": "BaseBdev2", 00:11:15.278 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.278 "is_configured": false, 00:11:15.278 "data_offset": 0, 00:11:15.278 "data_size": 0 00:11:15.278 }, 00:11:15.278 { 00:11:15.278 "name": "BaseBdev3", 00:11:15.278 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.278 "is_configured": false, 00:11:15.278 "data_offset": 0, 00:11:15.278 "data_size": 0 00:11:15.278 }, 00:11:15.278 { 00:11:15.278 "name": "BaseBdev4", 00:11:15.278 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.278 "is_configured": false, 00:11:15.278 "data_offset": 0, 00:11:15.278 "data_size": 0 00:11:15.278 } 00:11:15.278 ] 00:11:15.278 }' 00:11:15.278 16:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:15.278 16:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.538 16:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:15.538 16:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.538 16:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.538 [2024-12-12 16:07:41.877697] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:15.538 BaseBdev2 00:11:15.538 16:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.538 16:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:15.538 16:07:41 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:15.538 16:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:15.538 16:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:15.538 16:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:15.538 16:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:15.538 16:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:15.538 16:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.538 16:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.796 16:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.796 16:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:15.796 16:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.796 16:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.796 [ 00:11:15.796 { 00:11:15.796 "name": "BaseBdev2", 00:11:15.796 "aliases": [ 00:11:15.796 "ce694f24-a77c-486d-9968-c1250d5d7230" 00:11:15.796 ], 00:11:15.796 "product_name": "Malloc disk", 00:11:15.796 "block_size": 512, 00:11:15.796 "num_blocks": 65536, 00:11:15.796 "uuid": "ce694f24-a77c-486d-9968-c1250d5d7230", 00:11:15.796 "assigned_rate_limits": { 00:11:15.796 "rw_ios_per_sec": 0, 00:11:15.796 "rw_mbytes_per_sec": 0, 00:11:15.796 "r_mbytes_per_sec": 0, 00:11:15.796 "w_mbytes_per_sec": 0 00:11:15.796 }, 00:11:15.796 "claimed": true, 00:11:15.796 "claim_type": "exclusive_write", 00:11:15.796 "zoned": false, 00:11:15.796 "supported_io_types": { 
00:11:15.796 "read": true, 00:11:15.796 "write": true, 00:11:15.796 "unmap": true, 00:11:15.796 "flush": true, 00:11:15.796 "reset": true, 00:11:15.796 "nvme_admin": false, 00:11:15.796 "nvme_io": false, 00:11:15.796 "nvme_io_md": false, 00:11:15.796 "write_zeroes": true, 00:11:15.796 "zcopy": true, 00:11:15.796 "get_zone_info": false, 00:11:15.796 "zone_management": false, 00:11:15.796 "zone_append": false, 00:11:15.796 "compare": false, 00:11:15.797 "compare_and_write": false, 00:11:15.797 "abort": true, 00:11:15.797 "seek_hole": false, 00:11:15.797 "seek_data": false, 00:11:15.797 "copy": true, 00:11:15.797 "nvme_iov_md": false 00:11:15.797 }, 00:11:15.797 "memory_domains": [ 00:11:15.797 { 00:11:15.797 "dma_device_id": "system", 00:11:15.797 "dma_device_type": 1 00:11:15.797 }, 00:11:15.797 { 00:11:15.797 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:15.797 "dma_device_type": 2 00:11:15.797 } 00:11:15.797 ], 00:11:15.797 "driver_specific": {} 00:11:15.797 } 00:11:15.797 ] 00:11:15.797 16:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.797 16:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:15.797 16:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:15.797 16:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:15.797 16:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:15.797 16:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:15.797 16:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:15.797 16:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:15.797 16:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:11:15.797 16:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:15.797 16:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:15.797 16:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:15.797 16:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:15.797 16:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:15.797 16:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.797 16:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:15.797 16:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.797 16:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.797 16:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.797 16:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:15.797 "name": "Existed_Raid", 00:11:15.797 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.797 "strip_size_kb": 64, 00:11:15.797 "state": "configuring", 00:11:15.797 "raid_level": "concat", 00:11:15.797 "superblock": false, 00:11:15.797 "num_base_bdevs": 4, 00:11:15.797 "num_base_bdevs_discovered": 2, 00:11:15.797 "num_base_bdevs_operational": 4, 00:11:15.797 "base_bdevs_list": [ 00:11:15.797 { 00:11:15.797 "name": "BaseBdev1", 00:11:15.797 "uuid": "66ad3272-bab0-492c-9609-846b7b232801", 00:11:15.797 "is_configured": true, 00:11:15.797 "data_offset": 0, 00:11:15.797 "data_size": 65536 00:11:15.797 }, 00:11:15.797 { 00:11:15.797 "name": "BaseBdev2", 00:11:15.797 "uuid": "ce694f24-a77c-486d-9968-c1250d5d7230", 00:11:15.797 
"is_configured": true, 00:11:15.797 "data_offset": 0, 00:11:15.797 "data_size": 65536 00:11:15.797 }, 00:11:15.797 { 00:11:15.797 "name": "BaseBdev3", 00:11:15.797 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.797 "is_configured": false, 00:11:15.797 "data_offset": 0, 00:11:15.797 "data_size": 0 00:11:15.797 }, 00:11:15.797 { 00:11:15.797 "name": "BaseBdev4", 00:11:15.797 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.797 "is_configured": false, 00:11:15.797 "data_offset": 0, 00:11:15.797 "data_size": 0 00:11:15.797 } 00:11:15.797 ] 00:11:15.797 }' 00:11:15.797 16:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:15.797 16:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.056 16:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:16.056 16:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.056 16:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.315 [2024-12-12 16:07:42.419184] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:16.315 BaseBdev3 00:11:16.315 16:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.315 16:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:16.315 16:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:16.315 16:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:16.315 16:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:16.315 16:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:16.315 16:07:42 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:16.315 16:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:16.315 16:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.315 16:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.315 16:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.315 16:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:16.315 16:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.315 16:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.315 [ 00:11:16.315 { 00:11:16.315 "name": "BaseBdev3", 00:11:16.315 "aliases": [ 00:11:16.315 "a9234b83-11fe-4912-ad6d-d4bf72d904fc" 00:11:16.315 ], 00:11:16.315 "product_name": "Malloc disk", 00:11:16.315 "block_size": 512, 00:11:16.315 "num_blocks": 65536, 00:11:16.315 "uuid": "a9234b83-11fe-4912-ad6d-d4bf72d904fc", 00:11:16.315 "assigned_rate_limits": { 00:11:16.315 "rw_ios_per_sec": 0, 00:11:16.315 "rw_mbytes_per_sec": 0, 00:11:16.315 "r_mbytes_per_sec": 0, 00:11:16.315 "w_mbytes_per_sec": 0 00:11:16.315 }, 00:11:16.315 "claimed": true, 00:11:16.315 "claim_type": "exclusive_write", 00:11:16.315 "zoned": false, 00:11:16.315 "supported_io_types": { 00:11:16.315 "read": true, 00:11:16.315 "write": true, 00:11:16.315 "unmap": true, 00:11:16.315 "flush": true, 00:11:16.315 "reset": true, 00:11:16.315 "nvme_admin": false, 00:11:16.315 "nvme_io": false, 00:11:16.315 "nvme_io_md": false, 00:11:16.315 "write_zeroes": true, 00:11:16.315 "zcopy": true, 00:11:16.315 "get_zone_info": false, 00:11:16.315 "zone_management": false, 00:11:16.315 "zone_append": false, 00:11:16.315 "compare": false, 00:11:16.315 "compare_and_write": false, 
00:11:16.315 "abort": true, 00:11:16.315 "seek_hole": false, 00:11:16.315 "seek_data": false, 00:11:16.315 "copy": true, 00:11:16.315 "nvme_iov_md": false 00:11:16.315 }, 00:11:16.315 "memory_domains": [ 00:11:16.315 { 00:11:16.315 "dma_device_id": "system", 00:11:16.315 "dma_device_type": 1 00:11:16.315 }, 00:11:16.315 { 00:11:16.315 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:16.315 "dma_device_type": 2 00:11:16.315 } 00:11:16.315 ], 00:11:16.315 "driver_specific": {} 00:11:16.315 } 00:11:16.315 ] 00:11:16.315 16:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.315 16:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:16.315 16:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:16.315 16:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:16.315 16:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:16.315 16:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:16.315 16:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:16.315 16:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:16.315 16:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:16.315 16:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:16.315 16:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:16.315 16:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:16.315 16:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:11:16.315 16:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:16.315 16:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.316 16:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:16.316 16:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.316 16:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.316 16:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.316 16:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:16.316 "name": "Existed_Raid", 00:11:16.316 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:16.316 "strip_size_kb": 64, 00:11:16.316 "state": "configuring", 00:11:16.316 "raid_level": "concat", 00:11:16.316 "superblock": false, 00:11:16.316 "num_base_bdevs": 4, 00:11:16.316 "num_base_bdevs_discovered": 3, 00:11:16.316 "num_base_bdevs_operational": 4, 00:11:16.316 "base_bdevs_list": [ 00:11:16.316 { 00:11:16.316 "name": "BaseBdev1", 00:11:16.316 "uuid": "66ad3272-bab0-492c-9609-846b7b232801", 00:11:16.316 "is_configured": true, 00:11:16.316 "data_offset": 0, 00:11:16.316 "data_size": 65536 00:11:16.316 }, 00:11:16.316 { 00:11:16.316 "name": "BaseBdev2", 00:11:16.316 "uuid": "ce694f24-a77c-486d-9968-c1250d5d7230", 00:11:16.316 "is_configured": true, 00:11:16.316 "data_offset": 0, 00:11:16.316 "data_size": 65536 00:11:16.316 }, 00:11:16.316 { 00:11:16.316 "name": "BaseBdev3", 00:11:16.316 "uuid": "a9234b83-11fe-4912-ad6d-d4bf72d904fc", 00:11:16.316 "is_configured": true, 00:11:16.316 "data_offset": 0, 00:11:16.316 "data_size": 65536 00:11:16.316 }, 00:11:16.316 { 00:11:16.316 "name": "BaseBdev4", 00:11:16.316 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:16.316 "is_configured": false, 
00:11:16.316 "data_offset": 0, 00:11:16.316 "data_size": 0 00:11:16.316 } 00:11:16.316 ] 00:11:16.316 }' 00:11:16.316 16:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:16.316 16:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.575 16:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:16.575 16:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.575 16:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.834 [2024-12-12 16:07:42.965014] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:16.834 [2024-12-12 16:07:42.965078] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:16.834 [2024-12-12 16:07:42.965088] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:11:16.834 [2024-12-12 16:07:42.965414] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:16.834 [2024-12-12 16:07:42.965603] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:16.834 [2024-12-12 16:07:42.965627] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:16.834 [2024-12-12 16:07:42.965958] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:16.834 BaseBdev4 00:11:16.834 16:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.834 16:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:16.834 16:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:16.834 16:07:42 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:16.834 16:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:16.834 16:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:16.834 16:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:16.834 16:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:16.834 16:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.834 16:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.834 16:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.834 16:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:16.834 16:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.834 16:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.834 [ 00:11:16.834 { 00:11:16.834 "name": "BaseBdev4", 00:11:16.834 "aliases": [ 00:11:16.834 "a24ec8c2-d641-4f82-85be-1a06f1ddb131" 00:11:16.834 ], 00:11:16.834 "product_name": "Malloc disk", 00:11:16.834 "block_size": 512, 00:11:16.834 "num_blocks": 65536, 00:11:16.834 "uuid": "a24ec8c2-d641-4f82-85be-1a06f1ddb131", 00:11:16.834 "assigned_rate_limits": { 00:11:16.834 "rw_ios_per_sec": 0, 00:11:16.834 "rw_mbytes_per_sec": 0, 00:11:16.834 "r_mbytes_per_sec": 0, 00:11:16.834 "w_mbytes_per_sec": 0 00:11:16.834 }, 00:11:16.834 "claimed": true, 00:11:16.834 "claim_type": "exclusive_write", 00:11:16.834 "zoned": false, 00:11:16.834 "supported_io_types": { 00:11:16.834 "read": true, 00:11:16.834 "write": true, 00:11:16.834 "unmap": true, 00:11:16.834 "flush": true, 00:11:16.834 "reset": true, 00:11:16.834 
"nvme_admin": false, 00:11:16.834 "nvme_io": false, 00:11:16.834 "nvme_io_md": false, 00:11:16.834 "write_zeroes": true, 00:11:16.834 "zcopy": true, 00:11:16.834 "get_zone_info": false, 00:11:16.834 "zone_management": false, 00:11:16.834 "zone_append": false, 00:11:16.834 "compare": false, 00:11:16.834 "compare_and_write": false, 00:11:16.834 "abort": true, 00:11:16.834 "seek_hole": false, 00:11:16.834 "seek_data": false, 00:11:16.834 "copy": true, 00:11:16.834 "nvme_iov_md": false 00:11:16.834 }, 00:11:16.834 "memory_domains": [ 00:11:16.834 { 00:11:16.834 "dma_device_id": "system", 00:11:16.834 "dma_device_type": 1 00:11:16.834 }, 00:11:16.834 { 00:11:16.834 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:16.834 "dma_device_type": 2 00:11:16.834 } 00:11:16.834 ], 00:11:16.834 "driver_specific": {} 00:11:16.834 } 00:11:16.834 ] 00:11:16.834 16:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.834 16:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:16.834 16:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:16.834 16:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:16.834 16:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:11:16.834 16:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:16.834 16:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:16.834 16:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:16.834 16:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:16.834 16:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:16.834 
16:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:16.834 16:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:16.834 16:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:16.834 16:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:16.834 16:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.834 16:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.834 16:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:16.834 16:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.834 16:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.834 16:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:16.834 "name": "Existed_Raid", 00:11:16.834 "uuid": "c194240d-01c2-4b7a-863f-5cae3973faf2", 00:11:16.834 "strip_size_kb": 64, 00:11:16.834 "state": "online", 00:11:16.834 "raid_level": "concat", 00:11:16.834 "superblock": false, 00:11:16.834 "num_base_bdevs": 4, 00:11:16.834 "num_base_bdevs_discovered": 4, 00:11:16.834 "num_base_bdevs_operational": 4, 00:11:16.834 "base_bdevs_list": [ 00:11:16.834 { 00:11:16.834 "name": "BaseBdev1", 00:11:16.834 "uuid": "66ad3272-bab0-492c-9609-846b7b232801", 00:11:16.834 "is_configured": true, 00:11:16.834 "data_offset": 0, 00:11:16.834 "data_size": 65536 00:11:16.834 }, 00:11:16.834 { 00:11:16.834 "name": "BaseBdev2", 00:11:16.834 "uuid": "ce694f24-a77c-486d-9968-c1250d5d7230", 00:11:16.834 "is_configured": true, 00:11:16.834 "data_offset": 0, 00:11:16.834 "data_size": 65536 00:11:16.834 }, 00:11:16.834 { 00:11:16.834 "name": "BaseBdev3", 
00:11:16.835 "uuid": "a9234b83-11fe-4912-ad6d-d4bf72d904fc", 00:11:16.835 "is_configured": true, 00:11:16.835 "data_offset": 0, 00:11:16.835 "data_size": 65536 00:11:16.835 }, 00:11:16.835 { 00:11:16.835 "name": "BaseBdev4", 00:11:16.835 "uuid": "a24ec8c2-d641-4f82-85be-1a06f1ddb131", 00:11:16.835 "is_configured": true, 00:11:16.835 "data_offset": 0, 00:11:16.835 "data_size": 65536 00:11:16.835 } 00:11:16.835 ] 00:11:16.835 }' 00:11:16.835 16:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:16.835 16:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.094 16:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:17.094 16:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:17.094 16:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:17.094 16:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:17.094 16:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:17.094 16:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:17.094 16:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:17.094 16:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.094 16:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.094 16:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:17.094 [2024-12-12 16:07:43.420753] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:17.094 16:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.353 
16:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:17.353 "name": "Existed_Raid", 00:11:17.353 "aliases": [ 00:11:17.353 "c194240d-01c2-4b7a-863f-5cae3973faf2" 00:11:17.353 ], 00:11:17.353 "product_name": "Raid Volume", 00:11:17.353 "block_size": 512, 00:11:17.353 "num_blocks": 262144, 00:11:17.353 "uuid": "c194240d-01c2-4b7a-863f-5cae3973faf2", 00:11:17.353 "assigned_rate_limits": { 00:11:17.353 "rw_ios_per_sec": 0, 00:11:17.353 "rw_mbytes_per_sec": 0, 00:11:17.353 "r_mbytes_per_sec": 0, 00:11:17.353 "w_mbytes_per_sec": 0 00:11:17.353 }, 00:11:17.353 "claimed": false, 00:11:17.353 "zoned": false, 00:11:17.353 "supported_io_types": { 00:11:17.353 "read": true, 00:11:17.353 "write": true, 00:11:17.353 "unmap": true, 00:11:17.353 "flush": true, 00:11:17.353 "reset": true, 00:11:17.353 "nvme_admin": false, 00:11:17.353 "nvme_io": false, 00:11:17.353 "nvme_io_md": false, 00:11:17.353 "write_zeroes": true, 00:11:17.353 "zcopy": false, 00:11:17.353 "get_zone_info": false, 00:11:17.353 "zone_management": false, 00:11:17.353 "zone_append": false, 00:11:17.353 "compare": false, 00:11:17.353 "compare_and_write": false, 00:11:17.353 "abort": false, 00:11:17.353 "seek_hole": false, 00:11:17.353 "seek_data": false, 00:11:17.353 "copy": false, 00:11:17.353 "nvme_iov_md": false 00:11:17.353 }, 00:11:17.353 "memory_domains": [ 00:11:17.353 { 00:11:17.353 "dma_device_id": "system", 00:11:17.353 "dma_device_type": 1 00:11:17.353 }, 00:11:17.353 { 00:11:17.353 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:17.353 "dma_device_type": 2 00:11:17.353 }, 00:11:17.353 { 00:11:17.353 "dma_device_id": "system", 00:11:17.353 "dma_device_type": 1 00:11:17.353 }, 00:11:17.353 { 00:11:17.353 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:17.353 "dma_device_type": 2 00:11:17.353 }, 00:11:17.353 { 00:11:17.353 "dma_device_id": "system", 00:11:17.353 "dma_device_type": 1 00:11:17.353 }, 00:11:17.353 { 00:11:17.353 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:17.353 "dma_device_type": 2 00:11:17.353 }, 00:11:17.353 { 00:11:17.353 "dma_device_id": "system", 00:11:17.353 "dma_device_type": 1 00:11:17.353 }, 00:11:17.353 { 00:11:17.353 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:17.353 "dma_device_type": 2 00:11:17.353 } 00:11:17.353 ], 00:11:17.353 "driver_specific": { 00:11:17.353 "raid": { 00:11:17.353 "uuid": "c194240d-01c2-4b7a-863f-5cae3973faf2", 00:11:17.353 "strip_size_kb": 64, 00:11:17.353 "state": "online", 00:11:17.353 "raid_level": "concat", 00:11:17.353 "superblock": false, 00:11:17.353 "num_base_bdevs": 4, 00:11:17.353 "num_base_bdevs_discovered": 4, 00:11:17.353 "num_base_bdevs_operational": 4, 00:11:17.353 "base_bdevs_list": [ 00:11:17.353 { 00:11:17.353 "name": "BaseBdev1", 00:11:17.353 "uuid": "66ad3272-bab0-492c-9609-846b7b232801", 00:11:17.353 "is_configured": true, 00:11:17.353 "data_offset": 0, 00:11:17.353 "data_size": 65536 00:11:17.353 }, 00:11:17.353 { 00:11:17.353 "name": "BaseBdev2", 00:11:17.353 "uuid": "ce694f24-a77c-486d-9968-c1250d5d7230", 00:11:17.353 "is_configured": true, 00:11:17.353 "data_offset": 0, 00:11:17.353 "data_size": 65536 00:11:17.353 }, 00:11:17.353 { 00:11:17.353 "name": "BaseBdev3", 00:11:17.353 "uuid": "a9234b83-11fe-4912-ad6d-d4bf72d904fc", 00:11:17.353 "is_configured": true, 00:11:17.353 "data_offset": 0, 00:11:17.353 "data_size": 65536 00:11:17.353 }, 00:11:17.353 { 00:11:17.353 "name": "BaseBdev4", 00:11:17.353 "uuid": "a24ec8c2-d641-4f82-85be-1a06f1ddb131", 00:11:17.353 "is_configured": true, 00:11:17.353 "data_offset": 0, 00:11:17.353 "data_size": 65536 00:11:17.353 } 00:11:17.353 ] 00:11:17.353 } 00:11:17.353 } 00:11:17.353 }' 00:11:17.353 16:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:17.353 16:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:17.353 BaseBdev2 
00:11:17.353 BaseBdev3 00:11:17.353 BaseBdev4' 00:11:17.353 16:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:17.353 16:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:17.353 16:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:17.353 16:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:17.353 16:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:17.353 16:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.353 16:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.353 16:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.353 16:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:17.353 16:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:17.354 16:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:17.354 16:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:17.354 16:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.354 16:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.354 16:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:17.354 16:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.354 16:07:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:17.354 16:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:17.354 16:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:17.354 16:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:17.354 16:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:17.354 16:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.354 16:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.354 16:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.613 16:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:17.613 16:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:17.613 16:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:17.613 16:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:17.613 16:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.613 16:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.613 16:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:17.613 16:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.613 16:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:17.613 16:07:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:17.613 16:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:17.613 16:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.613 16:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.613 [2024-12-12 16:07:43.779949] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:17.613 [2024-12-12 16:07:43.779987] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:17.613 [2024-12-12 16:07:43.780050] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:17.613 16:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.613 16:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:17.613 16:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:11:17.613 16:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:17.613 16:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:17.613 16:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:17.613 16:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:11:17.613 16:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:17.613 16:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:17.613 16:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:17.613 16:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:11:17.613 16:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:17.613 16:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:17.613 16:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:17.613 16:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:17.613 16:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:17.613 16:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.613 16:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:17.614 16:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.614 16:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.614 16:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.614 16:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:17.614 "name": "Existed_Raid", 00:11:17.614 "uuid": "c194240d-01c2-4b7a-863f-5cae3973faf2", 00:11:17.614 "strip_size_kb": 64, 00:11:17.614 "state": "offline", 00:11:17.614 "raid_level": "concat", 00:11:17.614 "superblock": false, 00:11:17.614 "num_base_bdevs": 4, 00:11:17.614 "num_base_bdevs_discovered": 3, 00:11:17.614 "num_base_bdevs_operational": 3, 00:11:17.614 "base_bdevs_list": [ 00:11:17.614 { 00:11:17.614 "name": null, 00:11:17.614 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:17.614 "is_configured": false, 00:11:17.614 "data_offset": 0, 00:11:17.614 "data_size": 65536 00:11:17.614 }, 00:11:17.614 { 00:11:17.614 "name": "BaseBdev2", 00:11:17.614 "uuid": "ce694f24-a77c-486d-9968-c1250d5d7230", 00:11:17.614 "is_configured": 
true, 00:11:17.614 "data_offset": 0, 00:11:17.614 "data_size": 65536 00:11:17.614 }, 00:11:17.614 { 00:11:17.614 "name": "BaseBdev3", 00:11:17.614 "uuid": "a9234b83-11fe-4912-ad6d-d4bf72d904fc", 00:11:17.614 "is_configured": true, 00:11:17.614 "data_offset": 0, 00:11:17.614 "data_size": 65536 00:11:17.614 }, 00:11:17.614 { 00:11:17.614 "name": "BaseBdev4", 00:11:17.614 "uuid": "a24ec8c2-d641-4f82-85be-1a06f1ddb131", 00:11:17.614 "is_configured": true, 00:11:17.614 "data_offset": 0, 00:11:17.614 "data_size": 65536 00:11:17.614 } 00:11:17.614 ] 00:11:17.614 }' 00:11:17.614 16:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:17.614 16:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.181 16:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:18.181 16:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:18.181 16:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.181 16:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.181 16:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:18.181 16:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.181 16:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.181 16:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:18.181 16:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:18.181 16:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:18.181 16:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:18.181 16:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.181 [2024-12-12 16:07:44.410381] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:18.181 16:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.181 16:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:18.181 16:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:18.442 16:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.442 16:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.442 16:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:18.442 16:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.442 16:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.442 16:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:18.442 16:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:18.442 16:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:18.442 16:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.442 16:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.442 [2024-12-12 16:07:44.582315] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:18.442 16:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.442 16:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:18.442 16:07:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:18.442 16:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.442 16:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.442 16:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.442 16:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:18.442 16:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.442 16:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:18.442 16:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:18.442 16:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:18.442 16:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.442 16:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.442 [2024-12-12 16:07:44.752179] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:18.442 [2024-12-12 16:07:44.752247] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:18.722 16:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.722 16:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:18.722 16:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:18.722 16:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:18.722 16:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:11:18.722 16:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.722 16:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.722 16:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.722 16:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:18.722 16:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:18.722 16:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:18.722 16:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:18.722 16:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:18.722 16:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:18.722 16:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.722 16:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.722 BaseBdev2 00:11:18.722 16:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.722 16:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:18.722 16:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:18.722 16:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:18.722 16:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:18.722 16:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:18.722 16:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:11:18.722 16:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:18.722 16:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.722 16:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.722 16:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.722 16:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:18.722 16:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.722 16:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.722 [ 00:11:18.722 { 00:11:18.722 "name": "BaseBdev2", 00:11:18.722 "aliases": [ 00:11:18.722 "ada5730d-34f7-426c-90ba-02d6321cd963" 00:11:18.722 ], 00:11:18.722 "product_name": "Malloc disk", 00:11:18.722 "block_size": 512, 00:11:18.722 "num_blocks": 65536, 00:11:18.722 "uuid": "ada5730d-34f7-426c-90ba-02d6321cd963", 00:11:18.722 "assigned_rate_limits": { 00:11:18.722 "rw_ios_per_sec": 0, 00:11:18.722 "rw_mbytes_per_sec": 0, 00:11:18.722 "r_mbytes_per_sec": 0, 00:11:18.722 "w_mbytes_per_sec": 0 00:11:18.722 }, 00:11:18.722 "claimed": false, 00:11:18.722 "zoned": false, 00:11:18.722 "supported_io_types": { 00:11:18.722 "read": true, 00:11:18.722 "write": true, 00:11:18.722 "unmap": true, 00:11:18.722 "flush": true, 00:11:18.722 "reset": true, 00:11:18.722 "nvme_admin": false, 00:11:18.722 "nvme_io": false, 00:11:18.722 "nvme_io_md": false, 00:11:18.722 "write_zeroes": true, 00:11:18.722 "zcopy": true, 00:11:18.722 "get_zone_info": false, 00:11:18.722 "zone_management": false, 00:11:18.722 "zone_append": false, 00:11:18.722 "compare": false, 00:11:18.722 "compare_and_write": false, 00:11:18.722 "abort": true, 00:11:18.722 "seek_hole": false, 00:11:18.722 "seek_data": false, 
00:11:18.722 "copy": true, 00:11:18.722 "nvme_iov_md": false 00:11:18.722 }, 00:11:18.722 "memory_domains": [ 00:11:18.722 { 00:11:18.722 "dma_device_id": "system", 00:11:18.722 "dma_device_type": 1 00:11:18.722 }, 00:11:18.722 { 00:11:18.722 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:18.722 "dma_device_type": 2 00:11:18.722 } 00:11:18.722 ], 00:11:18.722 "driver_specific": {} 00:11:18.722 } 00:11:18.722 ] 00:11:18.722 16:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.722 16:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:18.722 16:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:18.722 16:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:18.722 16:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:18.722 16:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.722 16:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.722 BaseBdev3 00:11:18.722 16:07:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.722 16:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:18.722 16:07:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:18.722 16:07:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:18.722 16:07:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:18.722 16:07:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:18.722 16:07:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:18.722 
16:07:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:18.722 16:07:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.722 16:07:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.722 16:07:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.722 16:07:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:18.722 16:07:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.722 16:07:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.722 [ 00:11:18.722 { 00:11:18.722 "name": "BaseBdev3", 00:11:18.722 "aliases": [ 00:11:18.722 "9300a927-8bc0-4f65-814f-7133e0d359aa" 00:11:18.722 ], 00:11:18.722 "product_name": "Malloc disk", 00:11:18.722 "block_size": 512, 00:11:18.722 "num_blocks": 65536, 00:11:18.722 "uuid": "9300a927-8bc0-4f65-814f-7133e0d359aa", 00:11:18.722 "assigned_rate_limits": { 00:11:18.723 "rw_ios_per_sec": 0, 00:11:18.723 "rw_mbytes_per_sec": 0, 00:11:18.723 "r_mbytes_per_sec": 0, 00:11:18.723 "w_mbytes_per_sec": 0 00:11:18.723 }, 00:11:18.723 "claimed": false, 00:11:18.723 "zoned": false, 00:11:18.723 "supported_io_types": { 00:11:18.723 "read": true, 00:11:18.723 "write": true, 00:11:18.723 "unmap": true, 00:11:18.723 "flush": true, 00:11:18.723 "reset": true, 00:11:18.723 "nvme_admin": false, 00:11:18.723 "nvme_io": false, 00:11:18.723 "nvme_io_md": false, 00:11:18.723 "write_zeroes": true, 00:11:18.723 "zcopy": true, 00:11:18.723 "get_zone_info": false, 00:11:18.723 "zone_management": false, 00:11:18.723 "zone_append": false, 00:11:18.723 "compare": false, 00:11:18.723 "compare_and_write": false, 00:11:18.723 "abort": true, 00:11:18.723 "seek_hole": false, 00:11:18.723 "seek_data": false, 00:11:18.723 
"copy": true, 00:11:18.723 "nvme_iov_md": false 00:11:18.723 }, 00:11:18.723 "memory_domains": [ 00:11:18.723 { 00:11:18.723 "dma_device_id": "system", 00:11:18.723 "dma_device_type": 1 00:11:18.723 }, 00:11:18.723 { 00:11:18.723 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:18.723 "dma_device_type": 2 00:11:18.723 } 00:11:18.723 ], 00:11:18.723 "driver_specific": {} 00:11:18.723 } 00:11:18.723 ] 00:11:18.723 16:07:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.723 16:07:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:18.723 16:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:18.723 16:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:18.723 16:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:18.723 16:07:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.723 16:07:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.983 BaseBdev4 00:11:18.983 16:07:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.983 16:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:18.983 16:07:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:18.983 16:07:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:18.983 16:07:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:18.983 16:07:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:18.983 16:07:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:18.983 16:07:45 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:18.983 16:07:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.983 16:07:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.983 16:07:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.983 16:07:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:18.983 16:07:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.983 16:07:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.983 [ 00:11:18.983 { 00:11:18.983 "name": "BaseBdev4", 00:11:18.983 "aliases": [ 00:11:18.983 "7e969d7a-bab7-4d5a-a0ca-608835864433" 00:11:18.983 ], 00:11:18.983 "product_name": "Malloc disk", 00:11:18.983 "block_size": 512, 00:11:18.983 "num_blocks": 65536, 00:11:18.983 "uuid": "7e969d7a-bab7-4d5a-a0ca-608835864433", 00:11:18.983 "assigned_rate_limits": { 00:11:18.983 "rw_ios_per_sec": 0, 00:11:18.983 "rw_mbytes_per_sec": 0, 00:11:18.983 "r_mbytes_per_sec": 0, 00:11:18.983 "w_mbytes_per_sec": 0 00:11:18.983 }, 00:11:18.983 "claimed": false, 00:11:18.983 "zoned": false, 00:11:18.983 "supported_io_types": { 00:11:18.983 "read": true, 00:11:18.983 "write": true, 00:11:18.983 "unmap": true, 00:11:18.983 "flush": true, 00:11:18.983 "reset": true, 00:11:18.983 "nvme_admin": false, 00:11:18.983 "nvme_io": false, 00:11:18.983 "nvme_io_md": false, 00:11:18.983 "write_zeroes": true, 00:11:18.983 "zcopy": true, 00:11:18.983 "get_zone_info": false, 00:11:18.983 "zone_management": false, 00:11:18.983 "zone_append": false, 00:11:18.983 "compare": false, 00:11:18.983 "compare_and_write": false, 00:11:18.983 "abort": true, 00:11:18.983 "seek_hole": false, 00:11:18.983 "seek_data": false, 00:11:18.983 "copy": true, 
00:11:18.983 "nvme_iov_md": false 00:11:18.983 }, 00:11:18.983 "memory_domains": [ 00:11:18.983 { 00:11:18.983 "dma_device_id": "system", 00:11:18.983 "dma_device_type": 1 00:11:18.983 }, 00:11:18.983 { 00:11:18.983 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:18.983 "dma_device_type": 2 00:11:18.983 } 00:11:18.983 ], 00:11:18.983 "driver_specific": {} 00:11:18.983 } 00:11:18.983 ] 00:11:18.983 16:07:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.983 16:07:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:18.983 16:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:18.983 16:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:18.983 16:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:18.983 16:07:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.983 16:07:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.983 [2024-12-12 16:07:45.149464] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:18.983 [2024-12-12 16:07:45.149563] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:18.983 [2024-12-12 16:07:45.149617] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:18.983 [2024-12-12 16:07:45.151804] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:18.983 [2024-12-12 16:07:45.151922] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:18.983 16:07:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.983 16:07:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:18.983 16:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:18.983 16:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:18.983 16:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:18.983 16:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:18.983 16:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:18.983 16:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:18.983 16:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:18.983 16:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:18.983 16:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:18.983 16:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.983 16:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:18.983 16:07:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.983 16:07:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.983 16:07:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.983 16:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:18.983 "name": "Existed_Raid", 00:11:18.983 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:18.983 "strip_size_kb": 64, 00:11:18.983 "state": "configuring", 00:11:18.983 
"raid_level": "concat", 00:11:18.983 "superblock": false, 00:11:18.983 "num_base_bdevs": 4, 00:11:18.983 "num_base_bdevs_discovered": 3, 00:11:18.983 "num_base_bdevs_operational": 4, 00:11:18.983 "base_bdevs_list": [ 00:11:18.983 { 00:11:18.983 "name": "BaseBdev1", 00:11:18.983 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:18.983 "is_configured": false, 00:11:18.983 "data_offset": 0, 00:11:18.983 "data_size": 0 00:11:18.983 }, 00:11:18.983 { 00:11:18.983 "name": "BaseBdev2", 00:11:18.983 "uuid": "ada5730d-34f7-426c-90ba-02d6321cd963", 00:11:18.983 "is_configured": true, 00:11:18.983 "data_offset": 0, 00:11:18.983 "data_size": 65536 00:11:18.983 }, 00:11:18.983 { 00:11:18.983 "name": "BaseBdev3", 00:11:18.983 "uuid": "9300a927-8bc0-4f65-814f-7133e0d359aa", 00:11:18.983 "is_configured": true, 00:11:18.983 "data_offset": 0, 00:11:18.983 "data_size": 65536 00:11:18.983 }, 00:11:18.983 { 00:11:18.983 "name": "BaseBdev4", 00:11:18.983 "uuid": "7e969d7a-bab7-4d5a-a0ca-608835864433", 00:11:18.983 "is_configured": true, 00:11:18.983 "data_offset": 0, 00:11:18.983 "data_size": 65536 00:11:18.983 } 00:11:18.983 ] 00:11:18.983 }' 00:11:18.983 16:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:18.983 16:07:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.551 16:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:19.551 16:07:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.551 16:07:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.551 [2024-12-12 16:07:45.620786] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:19.551 16:07:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.551 16:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:19.551 16:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:19.551 16:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:19.551 16:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:19.551 16:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:19.551 16:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:19.551 16:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:19.551 16:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:19.551 16:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:19.551 16:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:19.551 16:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.551 16:07:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.551 16:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:19.551 16:07:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.551 16:07:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.551 16:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:19.551 "name": "Existed_Raid", 00:11:19.551 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:19.551 "strip_size_kb": 64, 00:11:19.551 "state": "configuring", 00:11:19.551 "raid_level": "concat", 00:11:19.551 "superblock": false, 
00:11:19.551 "num_base_bdevs": 4, 00:11:19.551 "num_base_bdevs_discovered": 2, 00:11:19.551 "num_base_bdevs_operational": 4, 00:11:19.551 "base_bdevs_list": [ 00:11:19.551 { 00:11:19.551 "name": "BaseBdev1", 00:11:19.551 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:19.551 "is_configured": false, 00:11:19.551 "data_offset": 0, 00:11:19.551 "data_size": 0 00:11:19.551 }, 00:11:19.551 { 00:11:19.551 "name": null, 00:11:19.551 "uuid": "ada5730d-34f7-426c-90ba-02d6321cd963", 00:11:19.551 "is_configured": false, 00:11:19.551 "data_offset": 0, 00:11:19.551 "data_size": 65536 00:11:19.551 }, 00:11:19.551 { 00:11:19.551 "name": "BaseBdev3", 00:11:19.551 "uuid": "9300a927-8bc0-4f65-814f-7133e0d359aa", 00:11:19.551 "is_configured": true, 00:11:19.551 "data_offset": 0, 00:11:19.551 "data_size": 65536 00:11:19.551 }, 00:11:19.551 { 00:11:19.551 "name": "BaseBdev4", 00:11:19.551 "uuid": "7e969d7a-bab7-4d5a-a0ca-608835864433", 00:11:19.551 "is_configured": true, 00:11:19.551 "data_offset": 0, 00:11:19.551 "data_size": 65536 00:11:19.551 } 00:11:19.551 ] 00:11:19.551 }' 00:11:19.551 16:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:19.551 16:07:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.810 16:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.810 16:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:19.810 16:07:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.810 16:07:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.810 16:07:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.810 16:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:19.810 16:07:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:19.810 16:07:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.810 16:07:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.070 [2024-12-12 16:07:46.169522] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:20.070 BaseBdev1 00:11:20.070 16:07:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.070 16:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:20.070 16:07:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:20.070 16:07:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:20.070 16:07:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:20.070 16:07:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:20.070 16:07:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:20.070 16:07:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:20.070 16:07:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.070 16:07:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.070 16:07:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.070 16:07:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:20.070 16:07:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.070 16:07:46 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:20.070 [ 00:11:20.070 { 00:11:20.070 "name": "BaseBdev1", 00:11:20.070 "aliases": [ 00:11:20.070 "f4c076bb-e290-4c3d-a37e-67389e1adf76" 00:11:20.070 ], 00:11:20.070 "product_name": "Malloc disk", 00:11:20.070 "block_size": 512, 00:11:20.070 "num_blocks": 65536, 00:11:20.070 "uuid": "f4c076bb-e290-4c3d-a37e-67389e1adf76", 00:11:20.070 "assigned_rate_limits": { 00:11:20.070 "rw_ios_per_sec": 0, 00:11:20.070 "rw_mbytes_per_sec": 0, 00:11:20.070 "r_mbytes_per_sec": 0, 00:11:20.070 "w_mbytes_per_sec": 0 00:11:20.070 }, 00:11:20.070 "claimed": true, 00:11:20.070 "claim_type": "exclusive_write", 00:11:20.070 "zoned": false, 00:11:20.070 "supported_io_types": { 00:11:20.070 "read": true, 00:11:20.070 "write": true, 00:11:20.070 "unmap": true, 00:11:20.070 "flush": true, 00:11:20.070 "reset": true, 00:11:20.070 "nvme_admin": false, 00:11:20.070 "nvme_io": false, 00:11:20.070 "nvme_io_md": false, 00:11:20.070 "write_zeroes": true, 00:11:20.070 "zcopy": true, 00:11:20.070 "get_zone_info": false, 00:11:20.070 "zone_management": false, 00:11:20.070 "zone_append": false, 00:11:20.070 "compare": false, 00:11:20.070 "compare_and_write": false, 00:11:20.070 "abort": true, 00:11:20.070 "seek_hole": false, 00:11:20.070 "seek_data": false, 00:11:20.070 "copy": true, 00:11:20.070 "nvme_iov_md": false 00:11:20.070 }, 00:11:20.070 "memory_domains": [ 00:11:20.070 { 00:11:20.070 "dma_device_id": "system", 00:11:20.070 "dma_device_type": 1 00:11:20.070 }, 00:11:20.070 { 00:11:20.070 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:20.070 "dma_device_type": 2 00:11:20.070 } 00:11:20.070 ], 00:11:20.070 "driver_specific": {} 00:11:20.070 } 00:11:20.070 ] 00:11:20.070 16:07:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.070 16:07:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:20.070 16:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:20.070 16:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:20.070 16:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:20.070 16:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:20.070 16:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:20.070 16:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:20.070 16:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:20.070 16:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:20.070 16:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:20.070 16:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:20.070 16:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.070 16:07:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.071 16:07:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.071 16:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:20.071 16:07:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.071 16:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:20.071 "name": "Existed_Raid", 00:11:20.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:20.071 "strip_size_kb": 64, 00:11:20.071 "state": "configuring", 00:11:20.071 "raid_level": "concat", 00:11:20.071 "superblock": false, 
00:11:20.071 "num_base_bdevs": 4, 00:11:20.071 "num_base_bdevs_discovered": 3, 00:11:20.071 "num_base_bdevs_operational": 4, 00:11:20.071 "base_bdevs_list": [ 00:11:20.071 { 00:11:20.071 "name": "BaseBdev1", 00:11:20.071 "uuid": "f4c076bb-e290-4c3d-a37e-67389e1adf76", 00:11:20.071 "is_configured": true, 00:11:20.071 "data_offset": 0, 00:11:20.071 "data_size": 65536 00:11:20.071 }, 00:11:20.071 { 00:11:20.071 "name": null, 00:11:20.071 "uuid": "ada5730d-34f7-426c-90ba-02d6321cd963", 00:11:20.071 "is_configured": false, 00:11:20.071 "data_offset": 0, 00:11:20.071 "data_size": 65536 00:11:20.071 }, 00:11:20.071 { 00:11:20.071 "name": "BaseBdev3", 00:11:20.071 "uuid": "9300a927-8bc0-4f65-814f-7133e0d359aa", 00:11:20.071 "is_configured": true, 00:11:20.071 "data_offset": 0, 00:11:20.071 "data_size": 65536 00:11:20.071 }, 00:11:20.071 { 00:11:20.071 "name": "BaseBdev4", 00:11:20.071 "uuid": "7e969d7a-bab7-4d5a-a0ca-608835864433", 00:11:20.071 "is_configured": true, 00:11:20.071 "data_offset": 0, 00:11:20.071 "data_size": 65536 00:11:20.071 } 00:11:20.071 ] 00:11:20.071 }' 00:11:20.071 16:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:20.071 16:07:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.329 16:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.329 16:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:20.329 16:07:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.329 16:07:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.329 16:07:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.587 16:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:20.588 16:07:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:20.588 16:07:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.588 16:07:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.588 [2024-12-12 16:07:46.700757] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:20.588 16:07:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.588 16:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:20.588 16:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:20.588 16:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:20.588 16:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:20.588 16:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:20.588 16:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:20.588 16:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:20.588 16:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:20.588 16:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:20.588 16:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:20.588 16:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:20.588 16:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.588 16:07:46 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.588 16:07:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.588 16:07:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.588 16:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:20.588 "name": "Existed_Raid", 00:11:20.588 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:20.588 "strip_size_kb": 64, 00:11:20.588 "state": "configuring", 00:11:20.588 "raid_level": "concat", 00:11:20.588 "superblock": false, 00:11:20.588 "num_base_bdevs": 4, 00:11:20.588 "num_base_bdevs_discovered": 2, 00:11:20.588 "num_base_bdevs_operational": 4, 00:11:20.588 "base_bdevs_list": [ 00:11:20.588 { 00:11:20.588 "name": "BaseBdev1", 00:11:20.588 "uuid": "f4c076bb-e290-4c3d-a37e-67389e1adf76", 00:11:20.588 "is_configured": true, 00:11:20.588 "data_offset": 0, 00:11:20.588 "data_size": 65536 00:11:20.588 }, 00:11:20.588 { 00:11:20.588 "name": null, 00:11:20.588 "uuid": "ada5730d-34f7-426c-90ba-02d6321cd963", 00:11:20.588 "is_configured": false, 00:11:20.588 "data_offset": 0, 00:11:20.588 "data_size": 65536 00:11:20.588 }, 00:11:20.588 { 00:11:20.588 "name": null, 00:11:20.588 "uuid": "9300a927-8bc0-4f65-814f-7133e0d359aa", 00:11:20.588 "is_configured": false, 00:11:20.588 "data_offset": 0, 00:11:20.588 "data_size": 65536 00:11:20.588 }, 00:11:20.588 { 00:11:20.588 "name": "BaseBdev4", 00:11:20.588 "uuid": "7e969d7a-bab7-4d5a-a0ca-608835864433", 00:11:20.588 "is_configured": true, 00:11:20.588 "data_offset": 0, 00:11:20.588 "data_size": 65536 00:11:20.588 } 00:11:20.588 ] 00:11:20.588 }' 00:11:20.588 16:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:20.588 16:07:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.845 16:07:47 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.845 16:07:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.845 16:07:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.845 16:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:20.845 16:07:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.104 16:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:21.104 16:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:21.104 16:07:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.104 16:07:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.104 [2024-12-12 16:07:47.227840] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:21.104 16:07:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.104 16:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:21.104 16:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:21.104 16:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:21.104 16:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:21.104 16:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:21.104 16:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:21.104 16:07:47 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:21.104 16:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:21.105 16:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:21.105 16:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:21.105 16:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.105 16:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:21.105 16:07:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.105 16:07:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.105 16:07:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.105 16:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:21.105 "name": "Existed_Raid", 00:11:21.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:21.105 "strip_size_kb": 64, 00:11:21.105 "state": "configuring", 00:11:21.105 "raid_level": "concat", 00:11:21.105 "superblock": false, 00:11:21.105 "num_base_bdevs": 4, 00:11:21.105 "num_base_bdevs_discovered": 3, 00:11:21.105 "num_base_bdevs_operational": 4, 00:11:21.105 "base_bdevs_list": [ 00:11:21.105 { 00:11:21.105 "name": "BaseBdev1", 00:11:21.105 "uuid": "f4c076bb-e290-4c3d-a37e-67389e1adf76", 00:11:21.105 "is_configured": true, 00:11:21.105 "data_offset": 0, 00:11:21.105 "data_size": 65536 00:11:21.105 }, 00:11:21.105 { 00:11:21.105 "name": null, 00:11:21.105 "uuid": "ada5730d-34f7-426c-90ba-02d6321cd963", 00:11:21.105 "is_configured": false, 00:11:21.105 "data_offset": 0, 00:11:21.105 "data_size": 65536 00:11:21.105 }, 00:11:21.105 { 00:11:21.105 "name": "BaseBdev3", 00:11:21.105 "uuid": 
"9300a927-8bc0-4f65-814f-7133e0d359aa", 00:11:21.105 "is_configured": true, 00:11:21.105 "data_offset": 0, 00:11:21.105 "data_size": 65536 00:11:21.105 }, 00:11:21.105 { 00:11:21.105 "name": "BaseBdev4", 00:11:21.105 "uuid": "7e969d7a-bab7-4d5a-a0ca-608835864433", 00:11:21.105 "is_configured": true, 00:11:21.105 "data_offset": 0, 00:11:21.105 "data_size": 65536 00:11:21.105 } 00:11:21.105 ] 00:11:21.105 }' 00:11:21.105 16:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:21.105 16:07:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.672 16:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:21.672 16:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.672 16:07:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.672 16:07:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.672 16:07:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.672 16:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:21.672 16:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:21.672 16:07:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.672 16:07:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.672 [2024-12-12 16:07:47.771051] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:21.672 16:07:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.672 16:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:11:21.672 16:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:21.672 16:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:21.672 16:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:21.672 16:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:21.672 16:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:21.672 16:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:21.672 16:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:21.672 16:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:21.672 16:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:21.672 16:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.672 16:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:21.673 16:07:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.673 16:07:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.673 16:07:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.673 16:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:21.673 "name": "Existed_Raid", 00:11:21.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:21.673 "strip_size_kb": 64, 00:11:21.673 "state": "configuring", 00:11:21.673 "raid_level": "concat", 00:11:21.673 "superblock": false, 00:11:21.673 "num_base_bdevs": 4, 00:11:21.673 
"num_base_bdevs_discovered": 2, 00:11:21.673 "num_base_bdevs_operational": 4, 00:11:21.673 "base_bdevs_list": [ 00:11:21.673 { 00:11:21.673 "name": null, 00:11:21.673 "uuid": "f4c076bb-e290-4c3d-a37e-67389e1adf76", 00:11:21.673 "is_configured": false, 00:11:21.673 "data_offset": 0, 00:11:21.673 "data_size": 65536 00:11:21.673 }, 00:11:21.673 { 00:11:21.673 "name": null, 00:11:21.673 "uuid": "ada5730d-34f7-426c-90ba-02d6321cd963", 00:11:21.673 "is_configured": false, 00:11:21.673 "data_offset": 0, 00:11:21.673 "data_size": 65536 00:11:21.673 }, 00:11:21.673 { 00:11:21.673 "name": "BaseBdev3", 00:11:21.673 "uuid": "9300a927-8bc0-4f65-814f-7133e0d359aa", 00:11:21.673 "is_configured": true, 00:11:21.673 "data_offset": 0, 00:11:21.673 "data_size": 65536 00:11:21.673 }, 00:11:21.673 { 00:11:21.673 "name": "BaseBdev4", 00:11:21.673 "uuid": "7e969d7a-bab7-4d5a-a0ca-608835864433", 00:11:21.673 "is_configured": true, 00:11:21.673 "data_offset": 0, 00:11:21.673 "data_size": 65536 00:11:21.673 } 00:11:21.673 ] 00:11:21.673 }' 00:11:21.673 16:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:21.673 16:07:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.241 16:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.241 16:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:22.241 16:07:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.241 16:07:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.241 16:07:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.241 16:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:22.241 16:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- 
# rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:22.241 16:07:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.241 16:07:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.241 [2024-12-12 16:07:48.419652] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:22.241 16:07:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.241 16:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:22.241 16:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:22.241 16:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:22.241 16:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:22.241 16:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:22.241 16:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:22.241 16:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:22.241 16:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:22.241 16:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:22.241 16:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:22.241 16:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.241 16:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:22.241 16:07:48 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.241 16:07:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.241 16:07:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.241 16:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:22.241 "name": "Existed_Raid", 00:11:22.241 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.241 "strip_size_kb": 64, 00:11:22.241 "state": "configuring", 00:11:22.241 "raid_level": "concat", 00:11:22.241 "superblock": false, 00:11:22.241 "num_base_bdevs": 4, 00:11:22.241 "num_base_bdevs_discovered": 3, 00:11:22.241 "num_base_bdevs_operational": 4, 00:11:22.241 "base_bdevs_list": [ 00:11:22.241 { 00:11:22.241 "name": null, 00:11:22.241 "uuid": "f4c076bb-e290-4c3d-a37e-67389e1adf76", 00:11:22.241 "is_configured": false, 00:11:22.241 "data_offset": 0, 00:11:22.241 "data_size": 65536 00:11:22.241 }, 00:11:22.241 { 00:11:22.241 "name": "BaseBdev2", 00:11:22.241 "uuid": "ada5730d-34f7-426c-90ba-02d6321cd963", 00:11:22.241 "is_configured": true, 00:11:22.241 "data_offset": 0, 00:11:22.241 "data_size": 65536 00:11:22.241 }, 00:11:22.241 { 00:11:22.241 "name": "BaseBdev3", 00:11:22.241 "uuid": "9300a927-8bc0-4f65-814f-7133e0d359aa", 00:11:22.241 "is_configured": true, 00:11:22.241 "data_offset": 0, 00:11:22.241 "data_size": 65536 00:11:22.241 }, 00:11:22.241 { 00:11:22.241 "name": "BaseBdev4", 00:11:22.241 "uuid": "7e969d7a-bab7-4d5a-a0ca-608835864433", 00:11:22.241 "is_configured": true, 00:11:22.241 "data_offset": 0, 00:11:22.241 "data_size": 65536 00:11:22.241 } 00:11:22.241 ] 00:11:22.241 }' 00:11:22.241 16:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:22.241 16:07:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.811 16:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq 
'.[0].base_bdevs_list[1].is_configured' 00:11:22.811 16:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.811 16:07:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.811 16:07:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.811 16:07:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.811 16:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:22.811 16:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.811 16:07:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.811 16:07:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.811 16:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:22.811 16:07:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.811 16:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u f4c076bb-e290-4c3d-a37e-67389e1adf76 00:11:22.811 16:07:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.811 16:07:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.811 [2024-12-12 16:07:49.031868] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:22.811 [2024-12-12 16:07:49.032008] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:22.811 [2024-12-12 16:07:49.032023] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:11:22.811 [2024-12-12 16:07:49.032331] bdev_raid.c: 265:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:22.811 [2024-12-12 16:07:49.032486] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:22.811 [2024-12-12 16:07:49.032511] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:22.811 [2024-12-12 16:07:49.032770] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:22.811 NewBaseBdev 00:11:22.811 16:07:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.811 16:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:22.811 16:07:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:22.811 16:07:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:22.811 16:07:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:22.811 16:07:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:22.811 16:07:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:22.811 16:07:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:22.811 16:07:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.811 16:07:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.811 16:07:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.811 16:07:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:22.811 16:07:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.811 16:07:49 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.811 [ 00:11:22.811 { 00:11:22.811 "name": "NewBaseBdev", 00:11:22.811 "aliases": [ 00:11:22.811 "f4c076bb-e290-4c3d-a37e-67389e1adf76" 00:11:22.811 ], 00:11:22.811 "product_name": "Malloc disk", 00:11:22.811 "block_size": 512, 00:11:22.811 "num_blocks": 65536, 00:11:22.811 "uuid": "f4c076bb-e290-4c3d-a37e-67389e1adf76", 00:11:22.811 "assigned_rate_limits": { 00:11:22.811 "rw_ios_per_sec": 0, 00:11:22.811 "rw_mbytes_per_sec": 0, 00:11:22.811 "r_mbytes_per_sec": 0, 00:11:22.811 "w_mbytes_per_sec": 0 00:11:22.811 }, 00:11:22.811 "claimed": true, 00:11:22.811 "claim_type": "exclusive_write", 00:11:22.811 "zoned": false, 00:11:22.811 "supported_io_types": { 00:11:22.811 "read": true, 00:11:22.811 "write": true, 00:11:22.811 "unmap": true, 00:11:22.811 "flush": true, 00:11:22.811 "reset": true, 00:11:22.811 "nvme_admin": false, 00:11:22.811 "nvme_io": false, 00:11:22.811 "nvme_io_md": false, 00:11:22.811 "write_zeroes": true, 00:11:22.811 "zcopy": true, 00:11:22.811 "get_zone_info": false, 00:11:22.811 "zone_management": false, 00:11:22.811 "zone_append": false, 00:11:22.811 "compare": false, 00:11:22.811 "compare_and_write": false, 00:11:22.811 "abort": true, 00:11:22.811 "seek_hole": false, 00:11:22.811 "seek_data": false, 00:11:22.811 "copy": true, 00:11:22.811 "nvme_iov_md": false 00:11:22.811 }, 00:11:22.811 "memory_domains": [ 00:11:22.811 { 00:11:22.811 "dma_device_id": "system", 00:11:22.811 "dma_device_type": 1 00:11:22.811 }, 00:11:22.811 { 00:11:22.811 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:22.811 "dma_device_type": 2 00:11:22.811 } 00:11:22.811 ], 00:11:22.811 "driver_specific": {} 00:11:22.811 } 00:11:22.812 ] 00:11:22.812 16:07:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.812 16:07:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:22.812 16:07:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:11:22.812 16:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:22.812 16:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:22.812 16:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:22.812 16:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:22.812 16:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:22.812 16:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:22.812 16:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:22.812 16:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:22.812 16:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:22.812 16:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.812 16:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:22.812 16:07:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.812 16:07:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.812 16:07:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.812 16:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:22.812 "name": "Existed_Raid", 00:11:22.812 "uuid": "a0ca001a-ded6-4a8f-9268-685eacba1ea2", 00:11:22.812 "strip_size_kb": 64, 00:11:22.812 "state": "online", 00:11:22.812 "raid_level": 
"concat", 00:11:22.812 "superblock": false, 00:11:22.812 "num_base_bdevs": 4, 00:11:22.812 "num_base_bdevs_discovered": 4, 00:11:22.812 "num_base_bdevs_operational": 4, 00:11:22.812 "base_bdevs_list": [ 00:11:22.812 { 00:11:22.812 "name": "NewBaseBdev", 00:11:22.812 "uuid": "f4c076bb-e290-4c3d-a37e-67389e1adf76", 00:11:22.812 "is_configured": true, 00:11:22.812 "data_offset": 0, 00:11:22.812 "data_size": 65536 00:11:22.812 }, 00:11:22.812 { 00:11:22.812 "name": "BaseBdev2", 00:11:22.812 "uuid": "ada5730d-34f7-426c-90ba-02d6321cd963", 00:11:22.812 "is_configured": true, 00:11:22.812 "data_offset": 0, 00:11:22.812 "data_size": 65536 00:11:22.812 }, 00:11:22.812 { 00:11:22.812 "name": "BaseBdev3", 00:11:22.812 "uuid": "9300a927-8bc0-4f65-814f-7133e0d359aa", 00:11:22.812 "is_configured": true, 00:11:22.812 "data_offset": 0, 00:11:22.812 "data_size": 65536 00:11:22.812 }, 00:11:22.812 { 00:11:22.812 "name": "BaseBdev4", 00:11:22.812 "uuid": "7e969d7a-bab7-4d5a-a0ca-608835864433", 00:11:22.812 "is_configured": true, 00:11:22.812 "data_offset": 0, 00:11:22.812 "data_size": 65536 00:11:22.812 } 00:11:22.812 ] 00:11:22.812 }' 00:11:22.812 16:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:22.812 16:07:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.382 16:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:23.382 16:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:23.382 16:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:23.382 16:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:23.382 16:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:23.382 16:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local 
cmp_raid_bdev cmp_base_bdev 00:11:23.382 16:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:23.382 16:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:23.382 16:07:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.382 16:07:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.382 [2024-12-12 16:07:49.475540] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:23.382 16:07:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.382 16:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:23.382 "name": "Existed_Raid", 00:11:23.382 "aliases": [ 00:11:23.382 "a0ca001a-ded6-4a8f-9268-685eacba1ea2" 00:11:23.382 ], 00:11:23.382 "product_name": "Raid Volume", 00:11:23.382 "block_size": 512, 00:11:23.382 "num_blocks": 262144, 00:11:23.382 "uuid": "a0ca001a-ded6-4a8f-9268-685eacba1ea2", 00:11:23.382 "assigned_rate_limits": { 00:11:23.382 "rw_ios_per_sec": 0, 00:11:23.382 "rw_mbytes_per_sec": 0, 00:11:23.382 "r_mbytes_per_sec": 0, 00:11:23.382 "w_mbytes_per_sec": 0 00:11:23.382 }, 00:11:23.382 "claimed": false, 00:11:23.382 "zoned": false, 00:11:23.382 "supported_io_types": { 00:11:23.382 "read": true, 00:11:23.382 "write": true, 00:11:23.382 "unmap": true, 00:11:23.382 "flush": true, 00:11:23.382 "reset": true, 00:11:23.382 "nvme_admin": false, 00:11:23.382 "nvme_io": false, 00:11:23.382 "nvme_io_md": false, 00:11:23.382 "write_zeroes": true, 00:11:23.382 "zcopy": false, 00:11:23.382 "get_zone_info": false, 00:11:23.382 "zone_management": false, 00:11:23.382 "zone_append": false, 00:11:23.382 "compare": false, 00:11:23.382 "compare_and_write": false, 00:11:23.382 "abort": false, 00:11:23.382 "seek_hole": false, 00:11:23.382 "seek_data": false, 00:11:23.382 "copy": false, 
00:11:23.382 "nvme_iov_md": false 00:11:23.382 }, 00:11:23.382 "memory_domains": [ 00:11:23.382 { 00:11:23.382 "dma_device_id": "system", 00:11:23.382 "dma_device_type": 1 00:11:23.382 }, 00:11:23.382 { 00:11:23.382 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:23.382 "dma_device_type": 2 00:11:23.382 }, 00:11:23.382 { 00:11:23.382 "dma_device_id": "system", 00:11:23.382 "dma_device_type": 1 00:11:23.382 }, 00:11:23.382 { 00:11:23.382 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:23.382 "dma_device_type": 2 00:11:23.382 }, 00:11:23.382 { 00:11:23.382 "dma_device_id": "system", 00:11:23.382 "dma_device_type": 1 00:11:23.382 }, 00:11:23.382 { 00:11:23.382 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:23.382 "dma_device_type": 2 00:11:23.382 }, 00:11:23.382 { 00:11:23.382 "dma_device_id": "system", 00:11:23.382 "dma_device_type": 1 00:11:23.382 }, 00:11:23.382 { 00:11:23.382 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:23.382 "dma_device_type": 2 00:11:23.382 } 00:11:23.382 ], 00:11:23.382 "driver_specific": { 00:11:23.382 "raid": { 00:11:23.382 "uuid": "a0ca001a-ded6-4a8f-9268-685eacba1ea2", 00:11:23.382 "strip_size_kb": 64, 00:11:23.382 "state": "online", 00:11:23.383 "raid_level": "concat", 00:11:23.383 "superblock": false, 00:11:23.383 "num_base_bdevs": 4, 00:11:23.383 "num_base_bdevs_discovered": 4, 00:11:23.383 "num_base_bdevs_operational": 4, 00:11:23.383 "base_bdevs_list": [ 00:11:23.383 { 00:11:23.383 "name": "NewBaseBdev", 00:11:23.383 "uuid": "f4c076bb-e290-4c3d-a37e-67389e1adf76", 00:11:23.383 "is_configured": true, 00:11:23.383 "data_offset": 0, 00:11:23.383 "data_size": 65536 00:11:23.383 }, 00:11:23.383 { 00:11:23.383 "name": "BaseBdev2", 00:11:23.383 "uuid": "ada5730d-34f7-426c-90ba-02d6321cd963", 00:11:23.383 "is_configured": true, 00:11:23.383 "data_offset": 0, 00:11:23.383 "data_size": 65536 00:11:23.383 }, 00:11:23.383 { 00:11:23.383 "name": "BaseBdev3", 00:11:23.383 "uuid": "9300a927-8bc0-4f65-814f-7133e0d359aa", 00:11:23.383 
"is_configured": true, 00:11:23.383 "data_offset": 0, 00:11:23.383 "data_size": 65536 00:11:23.383 }, 00:11:23.383 { 00:11:23.383 "name": "BaseBdev4", 00:11:23.383 "uuid": "7e969d7a-bab7-4d5a-a0ca-608835864433", 00:11:23.383 "is_configured": true, 00:11:23.383 "data_offset": 0, 00:11:23.383 "data_size": 65536 00:11:23.383 } 00:11:23.383 ] 00:11:23.383 } 00:11:23.383 } 00:11:23.383 }' 00:11:23.383 16:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:23.383 16:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:23.383 BaseBdev2 00:11:23.383 BaseBdev3 00:11:23.383 BaseBdev4' 00:11:23.383 16:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:23.383 16:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:23.383 16:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:23.383 16:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:23.383 16:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:23.383 16:07:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.383 16:07:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.383 16:07:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.383 16:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:23.383 16:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:23.383 16:07:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:23.383 16:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:23.383 16:07:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.383 16:07:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.383 16:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:23.383 16:07:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.383 16:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:23.383 16:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:23.383 16:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:23.383 16:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:23.383 16:07:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.383 16:07:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.383 16:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:23.383 16:07:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.643 16:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:23.643 16:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:23.643 16:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:23.643 16:07:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:23.643 16:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:23.643 16:07:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.643 16:07:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.643 16:07:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.643 16:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:23.643 16:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:23.643 16:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:23.643 16:07:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.643 16:07:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.643 [2024-12-12 16:07:49.818606] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:23.643 [2024-12-12 16:07:49.818637] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:23.643 [2024-12-12 16:07:49.818720] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:23.643 [2024-12-12 16:07:49.818792] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:23.643 [2024-12-12 16:07:49.818802] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:23.643 16:07:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.643 16:07:49 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 73317 00:11:23.643 16:07:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 73317 ']' 00:11:23.643 16:07:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 73317 00:11:23.643 16:07:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:11:23.643 16:07:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:23.643 16:07:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73317 00:11:23.643 killing process with pid 73317 00:11:23.643 16:07:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:23.643 16:07:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:23.643 16:07:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73317' 00:11:23.644 16:07:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 73317 00:11:23.644 [2024-12-12 16:07:49.867156] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:23.644 16:07:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 73317 00:11:24.213 [2024-12-12 16:07:50.256276] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:25.154 16:07:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:25.154 00:11:25.154 real 0m11.996s 00:11:25.154 user 0m19.195s 00:11:25.154 sys 0m2.009s 00:11:25.154 16:07:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:25.154 16:07:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.154 ************************************ 00:11:25.154 END TEST raid_state_function_test 00:11:25.154 ************************************ 
00:11:25.154 16:07:51 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:11:25.154 16:07:51 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:25.154 16:07:51 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:25.154 16:07:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:25.154 ************************************ 00:11:25.154 START TEST raid_state_function_test_sb 00:11:25.154 ************************************ 00:11:25.154 16:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 true 00:11:25.154 16:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:11:25.154 16:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:25.154 16:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:25.154 16:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:25.154 16:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:25.154 16:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:25.154 16:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:25.154 16:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:25.154 16:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:25.154 16:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:25.154 16:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:25.154 16:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:25.154 
16:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:25.154 16:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:25.154 16:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:25.154 16:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:25.154 16:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:25.154 16:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:25.154 16:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:25.154 16:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:25.154 16:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:25.154 16:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:25.154 16:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:25.154 16:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:25.154 16:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:11:25.155 16:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:25.155 16:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:25.155 16:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:25.155 16:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:25.155 16:07:51 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@229 -- # raid_pid=73994 00:11:25.155 16:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:25.155 16:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73994' 00:11:25.155 Process raid pid: 73994 00:11:25.155 16:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 73994 00:11:25.155 16:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 73994 ']' 00:11:25.155 16:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:25.155 16:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:25.155 16:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:25.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:25.155 16:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:25.155 16:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.414 [2024-12-12 16:07:51.528701] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:11:25.414 [2024-12-12 16:07:51.528917] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:25.414 [2024-12-12 16:07:51.684851] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:25.674 [2024-12-12 16:07:51.805863] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:25.674 [2024-12-12 16:07:52.008316] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:25.674 [2024-12-12 16:07:52.008362] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:26.243 16:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:26.244 16:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:11:26.244 16:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:26.244 16:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.244 16:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.244 [2024-12-12 16:07:52.371498] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:26.244 [2024-12-12 16:07:52.371557] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:26.244 [2024-12-12 16:07:52.371569] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:26.244 [2024-12-12 16:07:52.371581] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:26.244 [2024-12-12 16:07:52.371597] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:11:26.244 [2024-12-12 16:07:52.371607] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:26.244 [2024-12-12 16:07:52.371619] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:26.244 [2024-12-12 16:07:52.371629] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:26.244 16:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.244 16:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:26.244 16:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:26.244 16:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:26.244 16:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:26.244 16:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:26.244 16:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:26.244 16:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:26.244 16:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:26.244 16:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:26.244 16:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:26.244 16:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.244 16:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.244 16:07:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:26.244 16:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.244 16:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.244 16:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:26.244 "name": "Existed_Raid", 00:11:26.244 "uuid": "8078fb74-48c9-4378-b46c-c7ce63ea4f88", 00:11:26.244 "strip_size_kb": 64, 00:11:26.244 "state": "configuring", 00:11:26.244 "raid_level": "concat", 00:11:26.244 "superblock": true, 00:11:26.244 "num_base_bdevs": 4, 00:11:26.244 "num_base_bdevs_discovered": 0, 00:11:26.244 "num_base_bdevs_operational": 4, 00:11:26.244 "base_bdevs_list": [ 00:11:26.244 { 00:11:26.244 "name": "BaseBdev1", 00:11:26.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.244 "is_configured": false, 00:11:26.244 "data_offset": 0, 00:11:26.244 "data_size": 0 00:11:26.244 }, 00:11:26.244 { 00:11:26.244 "name": "BaseBdev2", 00:11:26.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.244 "is_configured": false, 00:11:26.244 "data_offset": 0, 00:11:26.244 "data_size": 0 00:11:26.244 }, 00:11:26.244 { 00:11:26.244 "name": "BaseBdev3", 00:11:26.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.244 "is_configured": false, 00:11:26.244 "data_offset": 0, 00:11:26.244 "data_size": 0 00:11:26.244 }, 00:11:26.244 { 00:11:26.244 "name": "BaseBdev4", 00:11:26.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.244 "is_configured": false, 00:11:26.244 "data_offset": 0, 00:11:26.244 "data_size": 0 00:11:26.244 } 00:11:26.244 ] 00:11:26.244 }' 00:11:26.244 16:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:26.244 16:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.504 16:07:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:26.504 16:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.504 16:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.504 [2024-12-12 16:07:52.782714] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:26.504 [2024-12-12 16:07:52.782755] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:26.504 16:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.504 16:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:26.504 16:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.504 16:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.504 [2024-12-12 16:07:52.790698] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:26.504 [2024-12-12 16:07:52.790739] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:26.504 [2024-12-12 16:07:52.790749] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:26.504 [2024-12-12 16:07:52.790758] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:26.504 [2024-12-12 16:07:52.790764] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:26.504 [2024-12-12 16:07:52.790772] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:26.504 [2024-12-12 16:07:52.790779] bdev.c:8697:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:11:26.504 [2024-12-12 16:07:52.790787] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:26.504 16:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.504 16:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:26.504 16:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.504 16:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.504 [2024-12-12 16:07:52.832948] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:26.504 BaseBdev1 00:11:26.504 16:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.504 16:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:26.504 16:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:26.504 16:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:26.504 16:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:26.504 16:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:26.504 16:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:26.504 16:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:26.504 16:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.504 16:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.504 16:07:52 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.504 16:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:26.504 16:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.504 16:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.763 [ 00:11:26.763 { 00:11:26.763 "name": "BaseBdev1", 00:11:26.763 "aliases": [ 00:11:26.763 "5a317872-da03-4c9d-91a4-02f5b30a448a" 00:11:26.763 ], 00:11:26.763 "product_name": "Malloc disk", 00:11:26.763 "block_size": 512, 00:11:26.763 "num_blocks": 65536, 00:11:26.763 "uuid": "5a317872-da03-4c9d-91a4-02f5b30a448a", 00:11:26.763 "assigned_rate_limits": { 00:11:26.763 "rw_ios_per_sec": 0, 00:11:26.763 "rw_mbytes_per_sec": 0, 00:11:26.763 "r_mbytes_per_sec": 0, 00:11:26.763 "w_mbytes_per_sec": 0 00:11:26.763 }, 00:11:26.763 "claimed": true, 00:11:26.763 "claim_type": "exclusive_write", 00:11:26.763 "zoned": false, 00:11:26.763 "supported_io_types": { 00:11:26.763 "read": true, 00:11:26.763 "write": true, 00:11:26.763 "unmap": true, 00:11:26.763 "flush": true, 00:11:26.763 "reset": true, 00:11:26.763 "nvme_admin": false, 00:11:26.763 "nvme_io": false, 00:11:26.763 "nvme_io_md": false, 00:11:26.763 "write_zeroes": true, 00:11:26.763 "zcopy": true, 00:11:26.763 "get_zone_info": false, 00:11:26.763 "zone_management": false, 00:11:26.763 "zone_append": false, 00:11:26.763 "compare": false, 00:11:26.763 "compare_and_write": false, 00:11:26.763 "abort": true, 00:11:26.763 "seek_hole": false, 00:11:26.763 "seek_data": false, 00:11:26.763 "copy": true, 00:11:26.763 "nvme_iov_md": false 00:11:26.763 }, 00:11:26.763 "memory_domains": [ 00:11:26.763 { 00:11:26.763 "dma_device_id": "system", 00:11:26.763 "dma_device_type": 1 00:11:26.763 }, 00:11:26.763 { 00:11:26.763 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.763 "dma_device_type": 2 00:11:26.763 } 
00:11:26.763 ], 00:11:26.763 "driver_specific": {} 00:11:26.763 } 00:11:26.763 ] 00:11:26.763 16:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.763 16:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:26.763 16:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:26.763 16:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:26.763 16:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:26.763 16:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:26.763 16:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:26.763 16:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:26.763 16:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:26.763 16:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:26.764 16:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:26.764 16:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:26.764 16:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.764 16:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.764 16:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:26.764 16:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.764 16:07:52 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.764 16:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:26.764 "name": "Existed_Raid", 00:11:26.764 "uuid": "de4c987e-06e8-4613-ba04-2b47638040cb", 00:11:26.764 "strip_size_kb": 64, 00:11:26.764 "state": "configuring", 00:11:26.764 "raid_level": "concat", 00:11:26.764 "superblock": true, 00:11:26.764 "num_base_bdevs": 4, 00:11:26.764 "num_base_bdevs_discovered": 1, 00:11:26.764 "num_base_bdevs_operational": 4, 00:11:26.764 "base_bdevs_list": [ 00:11:26.764 { 00:11:26.764 "name": "BaseBdev1", 00:11:26.764 "uuid": "5a317872-da03-4c9d-91a4-02f5b30a448a", 00:11:26.764 "is_configured": true, 00:11:26.764 "data_offset": 2048, 00:11:26.764 "data_size": 63488 00:11:26.764 }, 00:11:26.764 { 00:11:26.764 "name": "BaseBdev2", 00:11:26.764 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.764 "is_configured": false, 00:11:26.764 "data_offset": 0, 00:11:26.764 "data_size": 0 00:11:26.764 }, 00:11:26.764 { 00:11:26.764 "name": "BaseBdev3", 00:11:26.764 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.764 "is_configured": false, 00:11:26.764 "data_offset": 0, 00:11:26.764 "data_size": 0 00:11:26.764 }, 00:11:26.764 { 00:11:26.764 "name": "BaseBdev4", 00:11:26.764 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.764 "is_configured": false, 00:11:26.764 "data_offset": 0, 00:11:26.764 "data_size": 0 00:11:26.764 } 00:11:26.764 ] 00:11:26.764 }' 00:11:26.764 16:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:26.764 16:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.023 16:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:27.023 16:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.023 16:07:53 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.023 [2024-12-12 16:07:53.332151] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:27.023 [2024-12-12 16:07:53.332304] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:27.023 16:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.023 16:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:27.023 16:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.023 16:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.023 [2024-12-12 16:07:53.340189] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:27.023 [2024-12-12 16:07:53.342244] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:27.023 [2024-12-12 16:07:53.342288] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:27.023 [2024-12-12 16:07:53.342300] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:27.023 [2024-12-12 16:07:53.342312] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:27.023 [2024-12-12 16:07:53.342320] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:27.023 [2024-12-12 16:07:53.342329] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:27.023 16:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.023 16:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:11:27.023 16:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:27.023 16:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:27.023 16:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:27.023 16:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:27.023 16:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:27.023 16:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:27.023 16:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:27.023 16:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:27.023 16:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:27.023 16:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:27.023 16:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:27.023 16:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.023 16:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.023 16:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.023 16:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:27.023 16:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.281 16:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:11:27.281 "name": "Existed_Raid", 00:11:27.281 "uuid": "49f570ac-dd2e-4df7-af0b-6726009dd04c", 00:11:27.282 "strip_size_kb": 64, 00:11:27.282 "state": "configuring", 00:11:27.282 "raid_level": "concat", 00:11:27.282 "superblock": true, 00:11:27.282 "num_base_bdevs": 4, 00:11:27.282 "num_base_bdevs_discovered": 1, 00:11:27.282 "num_base_bdevs_operational": 4, 00:11:27.282 "base_bdevs_list": [ 00:11:27.282 { 00:11:27.282 "name": "BaseBdev1", 00:11:27.282 "uuid": "5a317872-da03-4c9d-91a4-02f5b30a448a", 00:11:27.282 "is_configured": true, 00:11:27.282 "data_offset": 2048, 00:11:27.282 "data_size": 63488 00:11:27.282 }, 00:11:27.282 { 00:11:27.282 "name": "BaseBdev2", 00:11:27.282 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.282 "is_configured": false, 00:11:27.282 "data_offset": 0, 00:11:27.282 "data_size": 0 00:11:27.282 }, 00:11:27.282 { 00:11:27.282 "name": "BaseBdev3", 00:11:27.282 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.282 "is_configured": false, 00:11:27.282 "data_offset": 0, 00:11:27.282 "data_size": 0 00:11:27.282 }, 00:11:27.282 { 00:11:27.282 "name": "BaseBdev4", 00:11:27.282 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.282 "is_configured": false, 00:11:27.282 "data_offset": 0, 00:11:27.282 "data_size": 0 00:11:27.282 } 00:11:27.282 ] 00:11:27.282 }' 00:11:27.282 16:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:27.282 16:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.540 16:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:27.540 16:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.540 16:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.540 [2024-12-12 16:07:53.834584] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:11:27.540 BaseBdev2 00:11:27.540 16:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.540 16:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:27.540 16:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:27.540 16:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:27.540 16:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:27.540 16:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:27.540 16:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:27.540 16:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:27.540 16:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.540 16:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.540 16:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.540 16:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:27.540 16:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.540 16:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.540 [ 00:11:27.540 { 00:11:27.540 "name": "BaseBdev2", 00:11:27.540 "aliases": [ 00:11:27.540 "e34ce62c-b889-4a1c-bee2-7c8a7d25f359" 00:11:27.540 ], 00:11:27.540 "product_name": "Malloc disk", 00:11:27.540 "block_size": 512, 00:11:27.540 "num_blocks": 65536, 00:11:27.540 "uuid": "e34ce62c-b889-4a1c-bee2-7c8a7d25f359", 
00:11:27.540 "assigned_rate_limits": { 00:11:27.540 "rw_ios_per_sec": 0, 00:11:27.540 "rw_mbytes_per_sec": 0, 00:11:27.540 "r_mbytes_per_sec": 0, 00:11:27.540 "w_mbytes_per_sec": 0 00:11:27.540 }, 00:11:27.540 "claimed": true, 00:11:27.540 "claim_type": "exclusive_write", 00:11:27.540 "zoned": false, 00:11:27.540 "supported_io_types": { 00:11:27.540 "read": true, 00:11:27.540 "write": true, 00:11:27.540 "unmap": true, 00:11:27.540 "flush": true, 00:11:27.540 "reset": true, 00:11:27.540 "nvme_admin": false, 00:11:27.540 "nvme_io": false, 00:11:27.540 "nvme_io_md": false, 00:11:27.540 "write_zeroes": true, 00:11:27.540 "zcopy": true, 00:11:27.540 "get_zone_info": false, 00:11:27.540 "zone_management": false, 00:11:27.540 "zone_append": false, 00:11:27.540 "compare": false, 00:11:27.540 "compare_and_write": false, 00:11:27.540 "abort": true, 00:11:27.540 "seek_hole": false, 00:11:27.540 "seek_data": false, 00:11:27.540 "copy": true, 00:11:27.540 "nvme_iov_md": false 00:11:27.540 }, 00:11:27.540 "memory_domains": [ 00:11:27.540 { 00:11:27.540 "dma_device_id": "system", 00:11:27.540 "dma_device_type": 1 00:11:27.540 }, 00:11:27.540 { 00:11:27.540 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:27.540 "dma_device_type": 2 00:11:27.540 } 00:11:27.540 ], 00:11:27.540 "driver_specific": {} 00:11:27.540 } 00:11:27.540 ] 00:11:27.540 16:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.540 16:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:27.540 16:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:27.540 16:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:27.540 16:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:27.540 16:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:11:27.540 16:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:27.540 16:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:27.541 16:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:27.541 16:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:27.541 16:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:27.541 16:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:27.541 16:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:27.541 16:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:27.541 16:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.541 16:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:27.541 16:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.541 16:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.799 16:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.799 16:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:27.799 "name": "Existed_Raid", 00:11:27.799 "uuid": "49f570ac-dd2e-4df7-af0b-6726009dd04c", 00:11:27.799 "strip_size_kb": 64, 00:11:27.799 "state": "configuring", 00:11:27.799 "raid_level": "concat", 00:11:27.799 "superblock": true, 00:11:27.799 "num_base_bdevs": 4, 00:11:27.799 "num_base_bdevs_discovered": 2, 00:11:27.799 
"num_base_bdevs_operational": 4, 00:11:27.799 "base_bdevs_list": [ 00:11:27.799 { 00:11:27.799 "name": "BaseBdev1", 00:11:27.799 "uuid": "5a317872-da03-4c9d-91a4-02f5b30a448a", 00:11:27.799 "is_configured": true, 00:11:27.799 "data_offset": 2048, 00:11:27.799 "data_size": 63488 00:11:27.799 }, 00:11:27.799 { 00:11:27.799 "name": "BaseBdev2", 00:11:27.799 "uuid": "e34ce62c-b889-4a1c-bee2-7c8a7d25f359", 00:11:27.799 "is_configured": true, 00:11:27.799 "data_offset": 2048, 00:11:27.799 "data_size": 63488 00:11:27.799 }, 00:11:27.799 { 00:11:27.799 "name": "BaseBdev3", 00:11:27.799 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.799 "is_configured": false, 00:11:27.799 "data_offset": 0, 00:11:27.799 "data_size": 0 00:11:27.799 }, 00:11:27.799 { 00:11:27.799 "name": "BaseBdev4", 00:11:27.799 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.799 "is_configured": false, 00:11:27.799 "data_offset": 0, 00:11:27.799 "data_size": 0 00:11:27.799 } 00:11:27.799 ] 00:11:27.799 }' 00:11:27.799 16:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:27.799 16:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.057 16:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:28.057 16:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.057 16:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.057 [2024-12-12 16:07:54.387785] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:28.057 BaseBdev3 00:11:28.057 16:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.057 16:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:28.057 16:07:54 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:28.057 16:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:28.057 16:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:28.057 16:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:28.057 16:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:28.057 16:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:28.057 16:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.057 16:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.057 16:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.057 16:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:28.057 16:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.057 16:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.315 [ 00:11:28.315 { 00:11:28.315 "name": "BaseBdev3", 00:11:28.315 "aliases": [ 00:11:28.315 "6b0ed1e8-85cd-4cf4-a000-01c222ad5f82" 00:11:28.315 ], 00:11:28.315 "product_name": "Malloc disk", 00:11:28.315 "block_size": 512, 00:11:28.315 "num_blocks": 65536, 00:11:28.315 "uuid": "6b0ed1e8-85cd-4cf4-a000-01c222ad5f82", 00:11:28.315 "assigned_rate_limits": { 00:11:28.315 "rw_ios_per_sec": 0, 00:11:28.315 "rw_mbytes_per_sec": 0, 00:11:28.315 "r_mbytes_per_sec": 0, 00:11:28.315 "w_mbytes_per_sec": 0 00:11:28.315 }, 00:11:28.315 "claimed": true, 00:11:28.315 "claim_type": "exclusive_write", 00:11:28.315 "zoned": false, 00:11:28.315 "supported_io_types": { 
00:11:28.315 "read": true, 00:11:28.315 "write": true, 00:11:28.315 "unmap": true, 00:11:28.315 "flush": true, 00:11:28.315 "reset": true, 00:11:28.315 "nvme_admin": false, 00:11:28.315 "nvme_io": false, 00:11:28.315 "nvme_io_md": false, 00:11:28.315 "write_zeroes": true, 00:11:28.315 "zcopy": true, 00:11:28.315 "get_zone_info": false, 00:11:28.315 "zone_management": false, 00:11:28.315 "zone_append": false, 00:11:28.315 "compare": false, 00:11:28.315 "compare_and_write": false, 00:11:28.315 "abort": true, 00:11:28.315 "seek_hole": false, 00:11:28.315 "seek_data": false, 00:11:28.315 "copy": true, 00:11:28.315 "nvme_iov_md": false 00:11:28.315 }, 00:11:28.315 "memory_domains": [ 00:11:28.315 { 00:11:28.315 "dma_device_id": "system", 00:11:28.315 "dma_device_type": 1 00:11:28.315 }, 00:11:28.315 { 00:11:28.315 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:28.315 "dma_device_type": 2 00:11:28.315 } 00:11:28.315 ], 00:11:28.315 "driver_specific": {} 00:11:28.315 } 00:11:28.315 ] 00:11:28.315 16:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.315 16:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:28.315 16:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:28.315 16:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:28.315 16:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:28.315 16:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:28.315 16:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:28.315 16:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:28.315 16:07:54 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:28.315 16:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:28.315 16:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:28.315 16:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:28.315 16:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:28.315 16:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:28.315 16:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.315 16:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:28.315 16:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.315 16:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.315 16:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.315 16:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:28.315 "name": "Existed_Raid", 00:11:28.315 "uuid": "49f570ac-dd2e-4df7-af0b-6726009dd04c", 00:11:28.315 "strip_size_kb": 64, 00:11:28.315 "state": "configuring", 00:11:28.315 "raid_level": "concat", 00:11:28.315 "superblock": true, 00:11:28.315 "num_base_bdevs": 4, 00:11:28.315 "num_base_bdevs_discovered": 3, 00:11:28.315 "num_base_bdevs_operational": 4, 00:11:28.315 "base_bdevs_list": [ 00:11:28.315 { 00:11:28.315 "name": "BaseBdev1", 00:11:28.315 "uuid": "5a317872-da03-4c9d-91a4-02f5b30a448a", 00:11:28.315 "is_configured": true, 00:11:28.315 "data_offset": 2048, 00:11:28.315 "data_size": 63488 00:11:28.315 }, 00:11:28.315 { 00:11:28.315 "name": "BaseBdev2", 00:11:28.315 
"uuid": "e34ce62c-b889-4a1c-bee2-7c8a7d25f359", 00:11:28.315 "is_configured": true, 00:11:28.315 "data_offset": 2048, 00:11:28.315 "data_size": 63488 00:11:28.315 }, 00:11:28.315 { 00:11:28.315 "name": "BaseBdev3", 00:11:28.315 "uuid": "6b0ed1e8-85cd-4cf4-a000-01c222ad5f82", 00:11:28.315 "is_configured": true, 00:11:28.315 "data_offset": 2048, 00:11:28.315 "data_size": 63488 00:11:28.315 }, 00:11:28.315 { 00:11:28.315 "name": "BaseBdev4", 00:11:28.315 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.315 "is_configured": false, 00:11:28.315 "data_offset": 0, 00:11:28.315 "data_size": 0 00:11:28.315 } 00:11:28.315 ] 00:11:28.315 }' 00:11:28.315 16:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:28.315 16:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.574 16:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:28.574 16:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.574 16:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.574 [2024-12-12 16:07:54.919027] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:28.574 [2024-12-12 16:07:54.919328] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:28.574 [2024-12-12 16:07:54.919345] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:28.574 BaseBdev4 00:11:28.574 [2024-12-12 16:07:54.919657] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:28.574 [2024-12-12 16:07:54.919831] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:28.574 [2024-12-12 16:07:54.919844] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:11:28.574 [2024-12-12 16:07:54.920035] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:28.574 16:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.574 16:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:28.574 16:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:28.574 16:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:28.574 16:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:28.574 16:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:28.574 16:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:28.574 16:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:28.574 16:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.574 16:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.832 16:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.832 16:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:28.832 16:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.832 16:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.832 [ 00:11:28.832 { 00:11:28.832 "name": "BaseBdev4", 00:11:28.832 "aliases": [ 00:11:28.832 "376ef90c-e437-4e5b-a849-840bf988c46e" 00:11:28.832 ], 00:11:28.832 "product_name": "Malloc disk", 00:11:28.832 "block_size": 512, 00:11:28.832 
"num_blocks": 65536, 00:11:28.832 "uuid": "376ef90c-e437-4e5b-a849-840bf988c46e", 00:11:28.832 "assigned_rate_limits": { 00:11:28.832 "rw_ios_per_sec": 0, 00:11:28.832 "rw_mbytes_per_sec": 0, 00:11:28.832 "r_mbytes_per_sec": 0, 00:11:28.832 "w_mbytes_per_sec": 0 00:11:28.832 }, 00:11:28.832 "claimed": true, 00:11:28.832 "claim_type": "exclusive_write", 00:11:28.832 "zoned": false, 00:11:28.832 "supported_io_types": { 00:11:28.832 "read": true, 00:11:28.832 "write": true, 00:11:28.832 "unmap": true, 00:11:28.832 "flush": true, 00:11:28.832 "reset": true, 00:11:28.832 "nvme_admin": false, 00:11:28.832 "nvme_io": false, 00:11:28.832 "nvme_io_md": false, 00:11:28.832 "write_zeroes": true, 00:11:28.832 "zcopy": true, 00:11:28.832 "get_zone_info": false, 00:11:28.832 "zone_management": false, 00:11:28.832 "zone_append": false, 00:11:28.832 "compare": false, 00:11:28.832 "compare_and_write": false, 00:11:28.832 "abort": true, 00:11:28.832 "seek_hole": false, 00:11:28.832 "seek_data": false, 00:11:28.832 "copy": true, 00:11:28.832 "nvme_iov_md": false 00:11:28.832 }, 00:11:28.832 "memory_domains": [ 00:11:28.832 { 00:11:28.832 "dma_device_id": "system", 00:11:28.832 "dma_device_type": 1 00:11:28.832 }, 00:11:28.832 { 00:11:28.832 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:28.832 "dma_device_type": 2 00:11:28.832 } 00:11:28.832 ], 00:11:28.832 "driver_specific": {} 00:11:28.832 } 00:11:28.832 ] 00:11:28.832 16:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.832 16:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:28.832 16:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:28.832 16:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:28.832 16:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 
00:11:28.832 16:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:28.832 16:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:28.832 16:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:28.832 16:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:28.832 16:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:28.832 16:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:28.832 16:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:28.832 16:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:28.832 16:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:28.832 16:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:28.832 16:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.832 16:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.832 16:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.832 16:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.832 16:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:28.832 "name": "Existed_Raid", 00:11:28.832 "uuid": "49f570ac-dd2e-4df7-af0b-6726009dd04c", 00:11:28.832 "strip_size_kb": 64, 00:11:28.832 "state": "online", 00:11:28.832 "raid_level": "concat", 00:11:28.832 "superblock": true, 00:11:28.832 "num_base_bdevs": 4, 
00:11:28.832 "num_base_bdevs_discovered": 4, 00:11:28.832 "num_base_bdevs_operational": 4, 00:11:28.832 "base_bdevs_list": [ 00:11:28.832 { 00:11:28.832 "name": "BaseBdev1", 00:11:28.832 "uuid": "5a317872-da03-4c9d-91a4-02f5b30a448a", 00:11:28.832 "is_configured": true, 00:11:28.832 "data_offset": 2048, 00:11:28.832 "data_size": 63488 00:11:28.832 }, 00:11:28.832 { 00:11:28.832 "name": "BaseBdev2", 00:11:28.832 "uuid": "e34ce62c-b889-4a1c-bee2-7c8a7d25f359", 00:11:28.832 "is_configured": true, 00:11:28.832 "data_offset": 2048, 00:11:28.832 "data_size": 63488 00:11:28.832 }, 00:11:28.832 { 00:11:28.832 "name": "BaseBdev3", 00:11:28.832 "uuid": "6b0ed1e8-85cd-4cf4-a000-01c222ad5f82", 00:11:28.832 "is_configured": true, 00:11:28.832 "data_offset": 2048, 00:11:28.832 "data_size": 63488 00:11:28.832 }, 00:11:28.832 { 00:11:28.832 "name": "BaseBdev4", 00:11:28.832 "uuid": "376ef90c-e437-4e5b-a849-840bf988c46e", 00:11:28.832 "is_configured": true, 00:11:28.832 "data_offset": 2048, 00:11:28.832 "data_size": 63488 00:11:28.832 } 00:11:28.832 ] 00:11:28.832 }' 00:11:28.833 16:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:28.833 16:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.089 16:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:29.089 16:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:29.089 16:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:29.089 16:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:29.089 16:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:29.089 16:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:29.089 
16:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:29.089 16:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.089 16:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:29.089 16:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.089 [2024-12-12 16:07:55.434695] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:29.347 16:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.347 16:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:29.347 "name": "Existed_Raid", 00:11:29.347 "aliases": [ 00:11:29.347 "49f570ac-dd2e-4df7-af0b-6726009dd04c" 00:11:29.347 ], 00:11:29.347 "product_name": "Raid Volume", 00:11:29.347 "block_size": 512, 00:11:29.347 "num_blocks": 253952, 00:11:29.347 "uuid": "49f570ac-dd2e-4df7-af0b-6726009dd04c", 00:11:29.347 "assigned_rate_limits": { 00:11:29.347 "rw_ios_per_sec": 0, 00:11:29.347 "rw_mbytes_per_sec": 0, 00:11:29.347 "r_mbytes_per_sec": 0, 00:11:29.347 "w_mbytes_per_sec": 0 00:11:29.347 }, 00:11:29.347 "claimed": false, 00:11:29.347 "zoned": false, 00:11:29.347 "supported_io_types": { 00:11:29.347 "read": true, 00:11:29.347 "write": true, 00:11:29.347 "unmap": true, 00:11:29.347 "flush": true, 00:11:29.347 "reset": true, 00:11:29.347 "nvme_admin": false, 00:11:29.347 "nvme_io": false, 00:11:29.347 "nvme_io_md": false, 00:11:29.347 "write_zeroes": true, 00:11:29.347 "zcopy": false, 00:11:29.347 "get_zone_info": false, 00:11:29.347 "zone_management": false, 00:11:29.347 "zone_append": false, 00:11:29.347 "compare": false, 00:11:29.347 "compare_and_write": false, 00:11:29.347 "abort": false, 00:11:29.347 "seek_hole": false, 00:11:29.347 "seek_data": false, 00:11:29.347 "copy": false, 00:11:29.347 
"nvme_iov_md": false 00:11:29.347 }, 00:11:29.347 "memory_domains": [ 00:11:29.347 { 00:11:29.347 "dma_device_id": "system", 00:11:29.347 "dma_device_type": 1 00:11:29.347 }, 00:11:29.347 { 00:11:29.347 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:29.347 "dma_device_type": 2 00:11:29.347 }, 00:11:29.347 { 00:11:29.347 "dma_device_id": "system", 00:11:29.347 "dma_device_type": 1 00:11:29.347 }, 00:11:29.347 { 00:11:29.347 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:29.347 "dma_device_type": 2 00:11:29.347 }, 00:11:29.347 { 00:11:29.347 "dma_device_id": "system", 00:11:29.347 "dma_device_type": 1 00:11:29.347 }, 00:11:29.348 { 00:11:29.348 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:29.348 "dma_device_type": 2 00:11:29.348 }, 00:11:29.348 { 00:11:29.348 "dma_device_id": "system", 00:11:29.348 "dma_device_type": 1 00:11:29.348 }, 00:11:29.348 { 00:11:29.348 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:29.348 "dma_device_type": 2 00:11:29.348 } 00:11:29.348 ], 00:11:29.348 "driver_specific": { 00:11:29.348 "raid": { 00:11:29.348 "uuid": "49f570ac-dd2e-4df7-af0b-6726009dd04c", 00:11:29.348 "strip_size_kb": 64, 00:11:29.348 "state": "online", 00:11:29.348 "raid_level": "concat", 00:11:29.348 "superblock": true, 00:11:29.348 "num_base_bdevs": 4, 00:11:29.348 "num_base_bdevs_discovered": 4, 00:11:29.348 "num_base_bdevs_operational": 4, 00:11:29.348 "base_bdevs_list": [ 00:11:29.348 { 00:11:29.348 "name": "BaseBdev1", 00:11:29.348 "uuid": "5a317872-da03-4c9d-91a4-02f5b30a448a", 00:11:29.348 "is_configured": true, 00:11:29.348 "data_offset": 2048, 00:11:29.348 "data_size": 63488 00:11:29.348 }, 00:11:29.348 { 00:11:29.348 "name": "BaseBdev2", 00:11:29.348 "uuid": "e34ce62c-b889-4a1c-bee2-7c8a7d25f359", 00:11:29.348 "is_configured": true, 00:11:29.348 "data_offset": 2048, 00:11:29.348 "data_size": 63488 00:11:29.348 }, 00:11:29.348 { 00:11:29.348 "name": "BaseBdev3", 00:11:29.348 "uuid": "6b0ed1e8-85cd-4cf4-a000-01c222ad5f82", 00:11:29.348 "is_configured": true, 
00:11:29.348 "data_offset": 2048, 00:11:29.348 "data_size": 63488 00:11:29.348 }, 00:11:29.348 { 00:11:29.348 "name": "BaseBdev4", 00:11:29.348 "uuid": "376ef90c-e437-4e5b-a849-840bf988c46e", 00:11:29.348 "is_configured": true, 00:11:29.348 "data_offset": 2048, 00:11:29.348 "data_size": 63488 00:11:29.348 } 00:11:29.348 ] 00:11:29.348 } 00:11:29.348 } 00:11:29.348 }' 00:11:29.348 16:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:29.348 16:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:29.348 BaseBdev2 00:11:29.348 BaseBdev3 00:11:29.348 BaseBdev4' 00:11:29.348 16:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:29.348 16:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:29.348 16:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:29.348 16:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:29.348 16:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:29.348 16:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.348 16:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.348 16:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.348 16:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:29.348 16:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:29.348 16:07:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:29.348 16:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:29.348 16:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.348 16:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.348 16:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:29.348 16:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.348 16:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:29.348 16:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:29.348 16:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:29.348 16:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:29.348 16:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.348 16:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.348 16:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:29.348 16:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.348 16:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:29.348 16:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:29.348 16:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:11:29.348 16:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:29.348 16:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:29.348 16:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.348 16:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.348 16:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.348 16:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:29.348 16:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:29.348 16:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:29.348 16:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.348 16:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.348 [2024-12-12 16:07:55.693957] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:29.348 [2024-12-12 16:07:55.694043] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:29.348 [2024-12-12 16:07:55.694131] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:29.606 16:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.606 16:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:29.606 16:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:11:29.606 16:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:11:29.606 16:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:11:29.606 16:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:29.606 16:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:11:29.606 16:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:29.606 16:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:29.606 16:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:29.606 16:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:29.606 16:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:29.606 16:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:29.606 16:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:29.606 16:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:29.606 16:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:29.606 16:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.606 16:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.606 16:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.606 16:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:29.606 16:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:29.606 16:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:29.606 "name": "Existed_Raid", 00:11:29.606 "uuid": "49f570ac-dd2e-4df7-af0b-6726009dd04c", 00:11:29.606 "strip_size_kb": 64, 00:11:29.606 "state": "offline", 00:11:29.606 "raid_level": "concat", 00:11:29.606 "superblock": true, 00:11:29.606 "num_base_bdevs": 4, 00:11:29.606 "num_base_bdevs_discovered": 3, 00:11:29.606 "num_base_bdevs_operational": 3, 00:11:29.606 "base_bdevs_list": [ 00:11:29.606 { 00:11:29.606 "name": null, 00:11:29.606 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:29.606 "is_configured": false, 00:11:29.606 "data_offset": 0, 00:11:29.606 "data_size": 63488 00:11:29.606 }, 00:11:29.606 { 00:11:29.606 "name": "BaseBdev2", 00:11:29.606 "uuid": "e34ce62c-b889-4a1c-bee2-7c8a7d25f359", 00:11:29.606 "is_configured": true, 00:11:29.606 "data_offset": 2048, 00:11:29.607 "data_size": 63488 00:11:29.607 }, 00:11:29.607 { 00:11:29.607 "name": "BaseBdev3", 00:11:29.607 "uuid": "6b0ed1e8-85cd-4cf4-a000-01c222ad5f82", 00:11:29.607 "is_configured": true, 00:11:29.607 "data_offset": 2048, 00:11:29.607 "data_size": 63488 00:11:29.607 }, 00:11:29.607 { 00:11:29.607 "name": "BaseBdev4", 00:11:29.607 "uuid": "376ef90c-e437-4e5b-a849-840bf988c46e", 00:11:29.607 "is_configured": true, 00:11:29.607 "data_offset": 2048, 00:11:29.607 "data_size": 63488 00:11:29.607 } 00:11:29.607 ] 00:11:29.607 }' 00:11:29.607 16:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:29.607 16:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.173 16:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:30.173 16:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:30.173 16:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.173 
16:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:30.173 16:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.173 16:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.173 16:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.173 16:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:30.173 16:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:30.173 16:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:30.173 16:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.173 16:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.173 [2024-12-12 16:07:56.302051] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:30.173 16:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.173 16:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:30.173 16:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:30.173 16:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.173 16:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.173 16:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.173 16:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:30.173 16:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:30.173 16:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:30.173 16:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:30.173 16:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:30.173 16:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.173 16:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.173 [2024-12-12 16:07:56.476398] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:30.431 16:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.431 16:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:30.431 16:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:30.431 16:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.431 16:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.431 16:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.431 16:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:30.431 16:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.431 16:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:30.431 16:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:30.431 16:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:30.431 16:07:56 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.431 16:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.431 [2024-12-12 16:07:56.644887] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:30.431 [2024-12-12 16:07:56.644957] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:30.431 16:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.431 16:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:30.431 16:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:30.431 16:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.431 16:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:30.431 16:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.431 16:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.431 16:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.735 16:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:30.735 16:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:30.735 16:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:30.735 16:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:30.735 16:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:30.735 16:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:11:30.735 16:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.735 16:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.735 BaseBdev2 00:11:30.735 16:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.735 16:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:30.735 16:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:30.735 16:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:30.735 16:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:30.735 16:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:30.735 16:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:30.735 16:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:30.735 16:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.735 16:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.735 16:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.735 16:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:30.735 16:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.735 16:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.735 [ 00:11:30.735 { 00:11:30.735 "name": "BaseBdev2", 00:11:30.735 "aliases": [ 00:11:30.735 
"96ed9f53-f96c-44ba-8cce-15d078db6d73" 00:11:30.735 ], 00:11:30.735 "product_name": "Malloc disk", 00:11:30.735 "block_size": 512, 00:11:30.735 "num_blocks": 65536, 00:11:30.735 "uuid": "96ed9f53-f96c-44ba-8cce-15d078db6d73", 00:11:30.735 "assigned_rate_limits": { 00:11:30.735 "rw_ios_per_sec": 0, 00:11:30.735 "rw_mbytes_per_sec": 0, 00:11:30.735 "r_mbytes_per_sec": 0, 00:11:30.735 "w_mbytes_per_sec": 0 00:11:30.735 }, 00:11:30.735 "claimed": false, 00:11:30.735 "zoned": false, 00:11:30.735 "supported_io_types": { 00:11:30.735 "read": true, 00:11:30.735 "write": true, 00:11:30.735 "unmap": true, 00:11:30.735 "flush": true, 00:11:30.735 "reset": true, 00:11:30.735 "nvme_admin": false, 00:11:30.735 "nvme_io": false, 00:11:30.735 "nvme_io_md": false, 00:11:30.735 "write_zeroes": true, 00:11:30.735 "zcopy": true, 00:11:30.735 "get_zone_info": false, 00:11:30.735 "zone_management": false, 00:11:30.735 "zone_append": false, 00:11:30.735 "compare": false, 00:11:30.735 "compare_and_write": false, 00:11:30.735 "abort": true, 00:11:30.735 "seek_hole": false, 00:11:30.735 "seek_data": false, 00:11:30.735 "copy": true, 00:11:30.735 "nvme_iov_md": false 00:11:30.735 }, 00:11:30.735 "memory_domains": [ 00:11:30.735 { 00:11:30.735 "dma_device_id": "system", 00:11:30.735 "dma_device_type": 1 00:11:30.735 }, 00:11:30.735 { 00:11:30.735 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:30.735 "dma_device_type": 2 00:11:30.735 } 00:11:30.735 ], 00:11:30.735 "driver_specific": {} 00:11:30.735 } 00:11:30.735 ] 00:11:30.735 16:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.735 16:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:30.735 16:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:30.735 16:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:30.735 16:07:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:30.735 16:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.735 16:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.735 BaseBdev3 00:11:30.735 16:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.735 16:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:30.735 16:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:30.735 16:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:30.735 16:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:30.735 16:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:30.735 16:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:30.735 16:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:30.735 16:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.735 16:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.735 16:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.735 16:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:30.735 16:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.735 16:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.735 [ 00:11:30.735 { 
00:11:30.735 "name": "BaseBdev3", 00:11:30.735 "aliases": [ 00:11:30.735 "1c00cf7b-7349-467c-9d2a-ef4f65b89fd2" 00:11:30.735 ], 00:11:30.735 "product_name": "Malloc disk", 00:11:30.735 "block_size": 512, 00:11:30.735 "num_blocks": 65536, 00:11:30.735 "uuid": "1c00cf7b-7349-467c-9d2a-ef4f65b89fd2", 00:11:30.735 "assigned_rate_limits": { 00:11:30.735 "rw_ios_per_sec": 0, 00:11:30.735 "rw_mbytes_per_sec": 0, 00:11:30.735 "r_mbytes_per_sec": 0, 00:11:30.735 "w_mbytes_per_sec": 0 00:11:30.735 }, 00:11:30.735 "claimed": false, 00:11:30.735 "zoned": false, 00:11:30.735 "supported_io_types": { 00:11:30.735 "read": true, 00:11:30.735 "write": true, 00:11:30.735 "unmap": true, 00:11:30.735 "flush": true, 00:11:30.735 "reset": true, 00:11:30.735 "nvme_admin": false, 00:11:30.735 "nvme_io": false, 00:11:30.735 "nvme_io_md": false, 00:11:30.735 "write_zeroes": true, 00:11:30.735 "zcopy": true, 00:11:30.735 "get_zone_info": false, 00:11:30.735 "zone_management": false, 00:11:30.735 "zone_append": false, 00:11:30.735 "compare": false, 00:11:30.735 "compare_and_write": false, 00:11:30.736 "abort": true, 00:11:30.736 "seek_hole": false, 00:11:30.736 "seek_data": false, 00:11:30.736 "copy": true, 00:11:30.736 "nvme_iov_md": false 00:11:30.736 }, 00:11:30.736 "memory_domains": [ 00:11:30.736 { 00:11:30.736 "dma_device_id": "system", 00:11:30.736 "dma_device_type": 1 00:11:30.736 }, 00:11:30.736 { 00:11:30.736 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:30.736 "dma_device_type": 2 00:11:30.736 } 00:11:30.736 ], 00:11:30.736 "driver_specific": {} 00:11:30.736 } 00:11:30.736 ] 00:11:30.736 16:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.736 16:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:30.736 16:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:30.736 16:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:11:30.736 16:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:30.736 16:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.736 16:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.736 BaseBdev4 00:11:30.736 16:07:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.736 16:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:30.736 16:07:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:30.736 16:07:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:30.736 16:07:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:30.736 16:07:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:30.736 16:07:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:30.736 16:07:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:30.736 16:07:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.736 16:07:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.736 16:07:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.736 16:07:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:30.736 16:07:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.736 16:07:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:11:30.736 [ 00:11:30.736 { 00:11:30.736 "name": "BaseBdev4", 00:11:30.736 "aliases": [ 00:11:30.736 "913e4a1f-9b29-42de-8cab-3c763fe922ee" 00:11:30.736 ], 00:11:30.736 "product_name": "Malloc disk", 00:11:30.736 "block_size": 512, 00:11:30.736 "num_blocks": 65536, 00:11:30.736 "uuid": "913e4a1f-9b29-42de-8cab-3c763fe922ee", 00:11:30.736 "assigned_rate_limits": { 00:11:30.736 "rw_ios_per_sec": 0, 00:11:30.736 "rw_mbytes_per_sec": 0, 00:11:30.736 "r_mbytes_per_sec": 0, 00:11:30.736 "w_mbytes_per_sec": 0 00:11:30.736 }, 00:11:30.736 "claimed": false, 00:11:30.736 "zoned": false, 00:11:30.736 "supported_io_types": { 00:11:30.736 "read": true, 00:11:30.736 "write": true, 00:11:30.736 "unmap": true, 00:11:30.736 "flush": true, 00:11:30.736 "reset": true, 00:11:30.736 "nvme_admin": false, 00:11:30.736 "nvme_io": false, 00:11:30.736 "nvme_io_md": false, 00:11:30.736 "write_zeroes": true, 00:11:30.736 "zcopy": true, 00:11:30.736 "get_zone_info": false, 00:11:30.736 "zone_management": false, 00:11:30.736 "zone_append": false, 00:11:30.736 "compare": false, 00:11:30.736 "compare_and_write": false, 00:11:30.736 "abort": true, 00:11:30.736 "seek_hole": false, 00:11:30.736 "seek_data": false, 00:11:30.736 "copy": true, 00:11:30.736 "nvme_iov_md": false 00:11:30.736 }, 00:11:30.736 "memory_domains": [ 00:11:30.736 { 00:11:30.736 "dma_device_id": "system", 00:11:30.736 "dma_device_type": 1 00:11:30.736 }, 00:11:30.736 { 00:11:30.736 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:30.736 "dma_device_type": 2 00:11:30.736 } 00:11:30.736 ], 00:11:30.736 "driver_specific": {} 00:11:30.736 } 00:11:30.736 ] 00:11:30.736 16:07:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.736 16:07:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:30.736 16:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:30.736 16:07:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:30.736 16:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:30.736 16:07:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.736 16:07:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.736 [2024-12-12 16:07:57.050832] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:30.736 [2024-12-12 16:07:57.050968] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:30.736 [2024-12-12 16:07:57.051031] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:30.736 [2024-12-12 16:07:57.053250] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:30.736 [2024-12-12 16:07:57.053355] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:30.736 16:07:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.736 16:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:31.021 16:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:31.021 16:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:31.021 16:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:31.021 16:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:31.021 16:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:11:31.021 16:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:31.021 16:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:31.021 16:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:31.021 16:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:31.021 16:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.021 16:07:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.021 16:07:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.021 16:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:31.021 16:07:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.021 16:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:31.021 "name": "Existed_Raid", 00:11:31.021 "uuid": "0972e4c5-cf89-4494-816b-76b1fbe33180", 00:11:31.021 "strip_size_kb": 64, 00:11:31.021 "state": "configuring", 00:11:31.021 "raid_level": "concat", 00:11:31.021 "superblock": true, 00:11:31.021 "num_base_bdevs": 4, 00:11:31.021 "num_base_bdevs_discovered": 3, 00:11:31.021 "num_base_bdevs_operational": 4, 00:11:31.021 "base_bdevs_list": [ 00:11:31.021 { 00:11:31.021 "name": "BaseBdev1", 00:11:31.021 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.021 "is_configured": false, 00:11:31.021 "data_offset": 0, 00:11:31.021 "data_size": 0 00:11:31.021 }, 00:11:31.021 { 00:11:31.021 "name": "BaseBdev2", 00:11:31.021 "uuid": "96ed9f53-f96c-44ba-8cce-15d078db6d73", 00:11:31.021 "is_configured": true, 00:11:31.021 "data_offset": 2048, 00:11:31.021 "data_size": 63488 
00:11:31.021 }, 00:11:31.021 { 00:11:31.021 "name": "BaseBdev3", 00:11:31.021 "uuid": "1c00cf7b-7349-467c-9d2a-ef4f65b89fd2", 00:11:31.021 "is_configured": true, 00:11:31.021 "data_offset": 2048, 00:11:31.021 "data_size": 63488 00:11:31.021 }, 00:11:31.021 { 00:11:31.021 "name": "BaseBdev4", 00:11:31.021 "uuid": "913e4a1f-9b29-42de-8cab-3c763fe922ee", 00:11:31.021 "is_configured": true, 00:11:31.021 "data_offset": 2048, 00:11:31.021 "data_size": 63488 00:11:31.021 } 00:11:31.021 ] 00:11:31.021 }' 00:11:31.021 16:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:31.021 16:07:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.280 16:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:31.280 16:07:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.280 16:07:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.280 [2024-12-12 16:07:57.550113] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:31.280 16:07:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.280 16:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:31.280 16:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:31.280 16:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:31.280 16:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:31.280 16:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:31.280 16:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:11:31.280 16:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:31.280 16:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:31.280 16:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:31.280 16:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:31.280 16:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.280 16:07:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.280 16:07:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.280 16:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:31.280 16:07:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.280 16:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:31.280 "name": "Existed_Raid", 00:11:31.280 "uuid": "0972e4c5-cf89-4494-816b-76b1fbe33180", 00:11:31.280 "strip_size_kb": 64, 00:11:31.280 "state": "configuring", 00:11:31.280 "raid_level": "concat", 00:11:31.280 "superblock": true, 00:11:31.280 "num_base_bdevs": 4, 00:11:31.280 "num_base_bdevs_discovered": 2, 00:11:31.280 "num_base_bdevs_operational": 4, 00:11:31.280 "base_bdevs_list": [ 00:11:31.280 { 00:11:31.280 "name": "BaseBdev1", 00:11:31.280 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.280 "is_configured": false, 00:11:31.280 "data_offset": 0, 00:11:31.280 "data_size": 0 00:11:31.280 }, 00:11:31.280 { 00:11:31.280 "name": null, 00:11:31.280 "uuid": "96ed9f53-f96c-44ba-8cce-15d078db6d73", 00:11:31.280 "is_configured": false, 00:11:31.280 "data_offset": 0, 00:11:31.280 "data_size": 63488 
00:11:31.280 }, 00:11:31.280 { 00:11:31.280 "name": "BaseBdev3", 00:11:31.280 "uuid": "1c00cf7b-7349-467c-9d2a-ef4f65b89fd2", 00:11:31.280 "is_configured": true, 00:11:31.280 "data_offset": 2048, 00:11:31.280 "data_size": 63488 00:11:31.280 }, 00:11:31.280 { 00:11:31.280 "name": "BaseBdev4", 00:11:31.280 "uuid": "913e4a1f-9b29-42de-8cab-3c763fe922ee", 00:11:31.280 "is_configured": true, 00:11:31.280 "data_offset": 2048, 00:11:31.280 "data_size": 63488 00:11:31.280 } 00:11:31.280 ] 00:11:31.280 }' 00:11:31.280 16:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:31.280 16:07:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.848 16:07:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:31.848 16:07:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.848 16:07:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.848 16:07:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.848 16:07:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.848 16:07:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:31.848 16:07:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:31.848 16:07:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.848 16:07:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.848 [2024-12-12 16:07:58.123715] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:31.848 BaseBdev1 00:11:31.848 16:07:58 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.848 16:07:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:31.848 16:07:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:31.848 16:07:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:31.848 16:07:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:31.848 16:07:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:31.848 16:07:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:31.848 16:07:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:31.848 16:07:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.848 16:07:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.848 16:07:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.848 16:07:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:31.848 16:07:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.848 16:07:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.848 [ 00:11:31.848 { 00:11:31.848 "name": "BaseBdev1", 00:11:31.848 "aliases": [ 00:11:31.848 "577067f7-00a9-4201-a3f0-c52f76e232d2" 00:11:31.848 ], 00:11:31.848 "product_name": "Malloc disk", 00:11:31.848 "block_size": 512, 00:11:31.848 "num_blocks": 65536, 00:11:31.848 "uuid": "577067f7-00a9-4201-a3f0-c52f76e232d2", 00:11:31.848 "assigned_rate_limits": { 00:11:31.848 "rw_ios_per_sec": 0, 00:11:31.848 "rw_mbytes_per_sec": 0, 
00:11:31.848 "r_mbytes_per_sec": 0, 00:11:31.848 "w_mbytes_per_sec": 0 00:11:31.848 }, 00:11:31.848 "claimed": true, 00:11:31.848 "claim_type": "exclusive_write", 00:11:31.848 "zoned": false, 00:11:31.848 "supported_io_types": { 00:11:31.848 "read": true, 00:11:31.848 "write": true, 00:11:31.848 "unmap": true, 00:11:31.848 "flush": true, 00:11:31.848 "reset": true, 00:11:31.848 "nvme_admin": false, 00:11:31.848 "nvme_io": false, 00:11:31.848 "nvme_io_md": false, 00:11:31.848 "write_zeroes": true, 00:11:31.848 "zcopy": true, 00:11:31.848 "get_zone_info": false, 00:11:31.848 "zone_management": false, 00:11:31.848 "zone_append": false, 00:11:31.848 "compare": false, 00:11:31.848 "compare_and_write": false, 00:11:31.848 "abort": true, 00:11:31.848 "seek_hole": false, 00:11:31.848 "seek_data": false, 00:11:31.848 "copy": true, 00:11:31.848 "nvme_iov_md": false 00:11:31.848 }, 00:11:31.848 "memory_domains": [ 00:11:31.848 { 00:11:31.848 "dma_device_id": "system", 00:11:31.848 "dma_device_type": 1 00:11:31.848 }, 00:11:31.848 { 00:11:31.848 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:31.848 "dma_device_type": 2 00:11:31.848 } 00:11:31.848 ], 00:11:31.848 "driver_specific": {} 00:11:31.848 } 00:11:31.848 ] 00:11:31.848 16:07:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.848 16:07:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:31.848 16:07:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:31.848 16:07:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:31.848 16:07:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:31.848 16:07:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:31.848 16:07:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:31.848 16:07:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:31.848 16:07:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:31.848 16:07:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:31.848 16:07:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:31.848 16:07:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:31.848 16:07:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.848 16:07:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:31.848 16:07:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.848 16:07:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.848 16:07:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.107 16:07:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:32.107 "name": "Existed_Raid", 00:11:32.107 "uuid": "0972e4c5-cf89-4494-816b-76b1fbe33180", 00:11:32.107 "strip_size_kb": 64, 00:11:32.107 "state": "configuring", 00:11:32.107 "raid_level": "concat", 00:11:32.107 "superblock": true, 00:11:32.107 "num_base_bdevs": 4, 00:11:32.107 "num_base_bdevs_discovered": 3, 00:11:32.107 "num_base_bdevs_operational": 4, 00:11:32.107 "base_bdevs_list": [ 00:11:32.107 { 00:11:32.107 "name": "BaseBdev1", 00:11:32.107 "uuid": "577067f7-00a9-4201-a3f0-c52f76e232d2", 00:11:32.107 "is_configured": true, 00:11:32.107 "data_offset": 2048, 00:11:32.107 "data_size": 63488 00:11:32.107 }, 00:11:32.107 { 
00:11:32.107 "name": null, 00:11:32.107 "uuid": "96ed9f53-f96c-44ba-8cce-15d078db6d73", 00:11:32.107 "is_configured": false, 00:11:32.107 "data_offset": 0, 00:11:32.107 "data_size": 63488 00:11:32.107 }, 00:11:32.107 { 00:11:32.107 "name": "BaseBdev3", 00:11:32.107 "uuid": "1c00cf7b-7349-467c-9d2a-ef4f65b89fd2", 00:11:32.107 "is_configured": true, 00:11:32.107 "data_offset": 2048, 00:11:32.107 "data_size": 63488 00:11:32.107 }, 00:11:32.107 { 00:11:32.107 "name": "BaseBdev4", 00:11:32.107 "uuid": "913e4a1f-9b29-42de-8cab-3c763fe922ee", 00:11:32.107 "is_configured": true, 00:11:32.107 "data_offset": 2048, 00:11:32.107 "data_size": 63488 00:11:32.107 } 00:11:32.107 ] 00:11:32.107 }' 00:11:32.107 16:07:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:32.107 16:07:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.366 16:07:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.366 16:07:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.366 16:07:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:32.366 16:07:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.366 16:07:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.366 16:07:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:32.366 16:07:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:32.366 16:07:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.366 16:07:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.366 [2024-12-12 16:07:58.651000] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:32.366 16:07:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.366 16:07:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:32.366 16:07:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:32.366 16:07:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:32.366 16:07:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:32.366 16:07:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:32.366 16:07:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:32.366 16:07:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:32.366 16:07:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:32.366 16:07:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:32.366 16:07:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:32.366 16:07:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.366 16:07:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.366 16:07:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:32.366 16:07:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.366 16:07:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.366 16:07:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:32.366 "name": "Existed_Raid", 00:11:32.366 "uuid": "0972e4c5-cf89-4494-816b-76b1fbe33180", 00:11:32.366 "strip_size_kb": 64, 00:11:32.366 "state": "configuring", 00:11:32.366 "raid_level": "concat", 00:11:32.366 "superblock": true, 00:11:32.366 "num_base_bdevs": 4, 00:11:32.366 "num_base_bdevs_discovered": 2, 00:11:32.366 "num_base_bdevs_operational": 4, 00:11:32.366 "base_bdevs_list": [ 00:11:32.366 { 00:11:32.366 "name": "BaseBdev1", 00:11:32.366 "uuid": "577067f7-00a9-4201-a3f0-c52f76e232d2", 00:11:32.366 "is_configured": true, 00:11:32.366 "data_offset": 2048, 00:11:32.366 "data_size": 63488 00:11:32.366 }, 00:11:32.366 { 00:11:32.366 "name": null, 00:11:32.366 "uuid": "96ed9f53-f96c-44ba-8cce-15d078db6d73", 00:11:32.366 "is_configured": false, 00:11:32.366 "data_offset": 0, 00:11:32.366 "data_size": 63488 00:11:32.366 }, 00:11:32.366 { 00:11:32.366 "name": null, 00:11:32.366 "uuid": "1c00cf7b-7349-467c-9d2a-ef4f65b89fd2", 00:11:32.366 "is_configured": false, 00:11:32.366 "data_offset": 0, 00:11:32.366 "data_size": 63488 00:11:32.366 }, 00:11:32.366 { 00:11:32.366 "name": "BaseBdev4", 00:11:32.366 "uuid": "913e4a1f-9b29-42de-8cab-3c763fe922ee", 00:11:32.366 "is_configured": true, 00:11:32.366 "data_offset": 2048, 00:11:32.366 "data_size": 63488 00:11:32.366 } 00:11:32.366 ] 00:11:32.366 }' 00:11:32.366 16:07:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:32.366 16:07:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.933 16:07:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:32.933 16:07:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.933 16:07:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.933 
16:07:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.933 16:07:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.933 16:07:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:32.933 16:07:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:32.933 16:07:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.933 16:07:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.933 [2024-12-12 16:07:59.126114] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:32.933 16:07:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.933 16:07:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:32.933 16:07:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:32.933 16:07:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:32.933 16:07:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:32.933 16:07:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:32.933 16:07:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:32.933 16:07:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:32.933 16:07:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:32.933 16:07:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:32.933 16:07:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:32.933 16:07:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.933 16:07:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.933 16:07:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.933 16:07:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:32.933 16:07:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.933 16:07:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:32.933 "name": "Existed_Raid", 00:11:32.933 "uuid": "0972e4c5-cf89-4494-816b-76b1fbe33180", 00:11:32.933 "strip_size_kb": 64, 00:11:32.933 "state": "configuring", 00:11:32.933 "raid_level": "concat", 00:11:32.933 "superblock": true, 00:11:32.933 "num_base_bdevs": 4, 00:11:32.933 "num_base_bdevs_discovered": 3, 00:11:32.933 "num_base_bdevs_operational": 4, 00:11:32.933 "base_bdevs_list": [ 00:11:32.933 { 00:11:32.933 "name": "BaseBdev1", 00:11:32.933 "uuid": "577067f7-00a9-4201-a3f0-c52f76e232d2", 00:11:32.933 "is_configured": true, 00:11:32.933 "data_offset": 2048, 00:11:32.933 "data_size": 63488 00:11:32.933 }, 00:11:32.933 { 00:11:32.933 "name": null, 00:11:32.933 "uuid": "96ed9f53-f96c-44ba-8cce-15d078db6d73", 00:11:32.933 "is_configured": false, 00:11:32.933 "data_offset": 0, 00:11:32.933 "data_size": 63488 00:11:32.933 }, 00:11:32.933 { 00:11:32.933 "name": "BaseBdev3", 00:11:32.933 "uuid": "1c00cf7b-7349-467c-9d2a-ef4f65b89fd2", 00:11:32.933 "is_configured": true, 00:11:32.933 "data_offset": 2048, 00:11:32.933 "data_size": 63488 00:11:32.933 }, 00:11:32.933 { 00:11:32.933 "name": "BaseBdev4", 00:11:32.933 "uuid": 
"913e4a1f-9b29-42de-8cab-3c763fe922ee", 00:11:32.933 "is_configured": true, 00:11:32.933 "data_offset": 2048, 00:11:32.933 "data_size": 63488 00:11:32.933 } 00:11:32.933 ] 00:11:32.933 }' 00:11:32.933 16:07:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:32.933 16:07:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.502 16:07:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.502 16:07:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:33.502 16:07:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.502 16:07:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.502 16:07:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.502 16:07:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:33.502 16:07:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:33.502 16:07:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.502 16:07:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.502 [2024-12-12 16:07:59.641416] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:33.502 16:07:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.502 16:07:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:33.502 16:07:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:33.502 16:07:59 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:33.502 16:07:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:33.502 16:07:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:33.502 16:07:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:33.502 16:07:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:33.502 16:07:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:33.502 16:07:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:33.502 16:07:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:33.502 16:07:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:33.502 16:07:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.502 16:07:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.502 16:07:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.502 16:07:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.502 16:07:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:33.502 "name": "Existed_Raid", 00:11:33.502 "uuid": "0972e4c5-cf89-4494-816b-76b1fbe33180", 00:11:33.502 "strip_size_kb": 64, 00:11:33.502 "state": "configuring", 00:11:33.502 "raid_level": "concat", 00:11:33.502 "superblock": true, 00:11:33.502 "num_base_bdevs": 4, 00:11:33.502 "num_base_bdevs_discovered": 2, 00:11:33.502 "num_base_bdevs_operational": 4, 00:11:33.502 "base_bdevs_list": [ 00:11:33.502 { 00:11:33.502 "name": null, 00:11:33.502 
"uuid": "577067f7-00a9-4201-a3f0-c52f76e232d2", 00:11:33.502 "is_configured": false, 00:11:33.502 "data_offset": 0, 00:11:33.502 "data_size": 63488 00:11:33.502 }, 00:11:33.502 { 00:11:33.502 "name": null, 00:11:33.502 "uuid": "96ed9f53-f96c-44ba-8cce-15d078db6d73", 00:11:33.502 "is_configured": false, 00:11:33.502 "data_offset": 0, 00:11:33.502 "data_size": 63488 00:11:33.502 }, 00:11:33.502 { 00:11:33.502 "name": "BaseBdev3", 00:11:33.502 "uuid": "1c00cf7b-7349-467c-9d2a-ef4f65b89fd2", 00:11:33.502 "is_configured": true, 00:11:33.502 "data_offset": 2048, 00:11:33.502 "data_size": 63488 00:11:33.502 }, 00:11:33.502 { 00:11:33.502 "name": "BaseBdev4", 00:11:33.502 "uuid": "913e4a1f-9b29-42de-8cab-3c763fe922ee", 00:11:33.502 "is_configured": true, 00:11:33.502 "data_offset": 2048, 00:11:33.502 "data_size": 63488 00:11:33.502 } 00:11:33.502 ] 00:11:33.502 }' 00:11:33.502 16:07:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:33.502 16:07:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.071 16:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:34.071 16:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.071 16:08:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.071 16:08:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.071 16:08:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.071 16:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:34.071 16:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:34.071 16:08:00 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.071 16:08:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.071 [2024-12-12 16:08:00.279111] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:34.071 16:08:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.071 16:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:34.071 16:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:34.071 16:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:34.071 16:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:34.071 16:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:34.071 16:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:34.071 16:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:34.071 16:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:34.071 16:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:34.071 16:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:34.071 16:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.071 16:08:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.071 16:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:34.071 16:08:00 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.071 16:08:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.071 16:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:34.071 "name": "Existed_Raid", 00:11:34.071 "uuid": "0972e4c5-cf89-4494-816b-76b1fbe33180", 00:11:34.071 "strip_size_kb": 64, 00:11:34.071 "state": "configuring", 00:11:34.071 "raid_level": "concat", 00:11:34.071 "superblock": true, 00:11:34.071 "num_base_bdevs": 4, 00:11:34.071 "num_base_bdevs_discovered": 3, 00:11:34.071 "num_base_bdevs_operational": 4, 00:11:34.071 "base_bdevs_list": [ 00:11:34.071 { 00:11:34.071 "name": null, 00:11:34.071 "uuid": "577067f7-00a9-4201-a3f0-c52f76e232d2", 00:11:34.071 "is_configured": false, 00:11:34.071 "data_offset": 0, 00:11:34.071 "data_size": 63488 00:11:34.071 }, 00:11:34.071 { 00:11:34.071 "name": "BaseBdev2", 00:11:34.071 "uuid": "96ed9f53-f96c-44ba-8cce-15d078db6d73", 00:11:34.071 "is_configured": true, 00:11:34.071 "data_offset": 2048, 00:11:34.071 "data_size": 63488 00:11:34.071 }, 00:11:34.071 { 00:11:34.071 "name": "BaseBdev3", 00:11:34.071 "uuid": "1c00cf7b-7349-467c-9d2a-ef4f65b89fd2", 00:11:34.071 "is_configured": true, 00:11:34.071 "data_offset": 2048, 00:11:34.072 "data_size": 63488 00:11:34.072 }, 00:11:34.072 { 00:11:34.072 "name": "BaseBdev4", 00:11:34.072 "uuid": "913e4a1f-9b29-42de-8cab-3c763fe922ee", 00:11:34.072 "is_configured": true, 00:11:34.072 "data_offset": 2048, 00:11:34.072 "data_size": 63488 00:11:34.072 } 00:11:34.072 ] 00:11:34.072 }' 00:11:34.072 16:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:34.072 16:08:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.639 16:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.639 16:08:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:34.639 16:08:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.639 16:08:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.639 16:08:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.639 16:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:34.639 16:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.639 16:08:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.639 16:08:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.639 16:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:34.639 16:08:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.639 16:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 577067f7-00a9-4201-a3f0-c52f76e232d2 00:11:34.639 16:08:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.639 16:08:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.639 [2024-12-12 16:08:00.861994] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:34.639 [2024-12-12 16:08:00.862285] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:34.639 [2024-12-12 16:08:00.862302] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:34.639 NewBaseBdev 00:11:34.639 [2024-12-12 16:08:00.862606] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:34.639 [2024-12-12 16:08:00.862765] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:34.639 [2024-12-12 16:08:00.862778] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:34.639 [2024-12-12 16:08:00.862939] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:34.640 16:08:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.640 16:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:34.640 16:08:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:34.640 16:08:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:34.640 16:08:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:34.640 16:08:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:34.640 16:08:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:34.640 16:08:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:34.640 16:08:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.640 16:08:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.640 16:08:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.640 16:08:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:34.640 16:08:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.640 
16:08:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.640 [ 00:11:34.640 { 00:11:34.640 "name": "NewBaseBdev", 00:11:34.640 "aliases": [ 00:11:34.640 "577067f7-00a9-4201-a3f0-c52f76e232d2" 00:11:34.640 ], 00:11:34.640 "product_name": "Malloc disk", 00:11:34.640 "block_size": 512, 00:11:34.640 "num_blocks": 65536, 00:11:34.640 "uuid": "577067f7-00a9-4201-a3f0-c52f76e232d2", 00:11:34.640 "assigned_rate_limits": { 00:11:34.640 "rw_ios_per_sec": 0, 00:11:34.640 "rw_mbytes_per_sec": 0, 00:11:34.640 "r_mbytes_per_sec": 0, 00:11:34.640 "w_mbytes_per_sec": 0 00:11:34.640 }, 00:11:34.640 "claimed": true, 00:11:34.640 "claim_type": "exclusive_write", 00:11:34.640 "zoned": false, 00:11:34.640 "supported_io_types": { 00:11:34.640 "read": true, 00:11:34.640 "write": true, 00:11:34.640 "unmap": true, 00:11:34.640 "flush": true, 00:11:34.640 "reset": true, 00:11:34.640 "nvme_admin": false, 00:11:34.640 "nvme_io": false, 00:11:34.640 "nvme_io_md": false, 00:11:34.640 "write_zeroes": true, 00:11:34.640 "zcopy": true, 00:11:34.640 "get_zone_info": false, 00:11:34.640 "zone_management": false, 00:11:34.640 "zone_append": false, 00:11:34.640 "compare": false, 00:11:34.640 "compare_and_write": false, 00:11:34.640 "abort": true, 00:11:34.640 "seek_hole": false, 00:11:34.640 "seek_data": false, 00:11:34.640 "copy": true, 00:11:34.640 "nvme_iov_md": false 00:11:34.640 }, 00:11:34.640 "memory_domains": [ 00:11:34.640 { 00:11:34.640 "dma_device_id": "system", 00:11:34.640 "dma_device_type": 1 00:11:34.640 }, 00:11:34.640 { 00:11:34.640 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:34.640 "dma_device_type": 2 00:11:34.640 } 00:11:34.640 ], 00:11:34.640 "driver_specific": {} 00:11:34.640 } 00:11:34.640 ] 00:11:34.640 16:08:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.640 16:08:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:34.640 16:08:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:11:34.640 16:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:34.640 16:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:34.640 16:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:34.640 16:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:34.640 16:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:34.640 16:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:34.640 16:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:34.640 16:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:34.640 16:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:34.640 16:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.640 16:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:34.640 16:08:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.640 16:08:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.640 16:08:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.640 16:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:34.640 "name": "Existed_Raid", 00:11:34.640 "uuid": "0972e4c5-cf89-4494-816b-76b1fbe33180", 00:11:34.640 "strip_size_kb": 64, 00:11:34.640 
"state": "online", 00:11:34.640 "raid_level": "concat", 00:11:34.640 "superblock": true, 00:11:34.640 "num_base_bdevs": 4, 00:11:34.640 "num_base_bdevs_discovered": 4, 00:11:34.640 "num_base_bdevs_operational": 4, 00:11:34.640 "base_bdevs_list": [ 00:11:34.640 { 00:11:34.640 "name": "NewBaseBdev", 00:11:34.640 "uuid": "577067f7-00a9-4201-a3f0-c52f76e232d2", 00:11:34.640 "is_configured": true, 00:11:34.640 "data_offset": 2048, 00:11:34.640 "data_size": 63488 00:11:34.640 }, 00:11:34.640 { 00:11:34.640 "name": "BaseBdev2", 00:11:34.640 "uuid": "96ed9f53-f96c-44ba-8cce-15d078db6d73", 00:11:34.640 "is_configured": true, 00:11:34.640 "data_offset": 2048, 00:11:34.640 "data_size": 63488 00:11:34.640 }, 00:11:34.640 { 00:11:34.640 "name": "BaseBdev3", 00:11:34.640 "uuid": "1c00cf7b-7349-467c-9d2a-ef4f65b89fd2", 00:11:34.640 "is_configured": true, 00:11:34.640 "data_offset": 2048, 00:11:34.640 "data_size": 63488 00:11:34.640 }, 00:11:34.640 { 00:11:34.640 "name": "BaseBdev4", 00:11:34.640 "uuid": "913e4a1f-9b29-42de-8cab-3c763fe922ee", 00:11:34.640 "is_configured": true, 00:11:34.640 "data_offset": 2048, 00:11:34.640 "data_size": 63488 00:11:34.640 } 00:11:34.640 ] 00:11:34.640 }' 00:11:34.640 16:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:34.640 16:08:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.216 16:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:35.216 16:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:35.216 16:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:35.216 16:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:35.216 16:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:35.216 
16:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:35.216 16:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:35.216 16:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:35.216 16:08:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.216 16:08:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.216 [2024-12-12 16:08:01.397518] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:35.216 16:08:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.216 16:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:35.216 "name": "Existed_Raid", 00:11:35.216 "aliases": [ 00:11:35.216 "0972e4c5-cf89-4494-816b-76b1fbe33180" 00:11:35.216 ], 00:11:35.216 "product_name": "Raid Volume", 00:11:35.216 "block_size": 512, 00:11:35.216 "num_blocks": 253952, 00:11:35.216 "uuid": "0972e4c5-cf89-4494-816b-76b1fbe33180", 00:11:35.216 "assigned_rate_limits": { 00:11:35.216 "rw_ios_per_sec": 0, 00:11:35.216 "rw_mbytes_per_sec": 0, 00:11:35.216 "r_mbytes_per_sec": 0, 00:11:35.216 "w_mbytes_per_sec": 0 00:11:35.216 }, 00:11:35.216 "claimed": false, 00:11:35.216 "zoned": false, 00:11:35.216 "supported_io_types": { 00:11:35.216 "read": true, 00:11:35.216 "write": true, 00:11:35.216 "unmap": true, 00:11:35.216 "flush": true, 00:11:35.216 "reset": true, 00:11:35.216 "nvme_admin": false, 00:11:35.216 "nvme_io": false, 00:11:35.216 "nvme_io_md": false, 00:11:35.216 "write_zeroes": true, 00:11:35.216 "zcopy": false, 00:11:35.216 "get_zone_info": false, 00:11:35.216 "zone_management": false, 00:11:35.216 "zone_append": false, 00:11:35.216 "compare": false, 00:11:35.216 "compare_and_write": false, 00:11:35.216 "abort": 
false, 00:11:35.216 "seek_hole": false, 00:11:35.216 "seek_data": false, 00:11:35.216 "copy": false, 00:11:35.216 "nvme_iov_md": false 00:11:35.216 }, 00:11:35.216 "memory_domains": [ 00:11:35.216 { 00:11:35.216 "dma_device_id": "system", 00:11:35.216 "dma_device_type": 1 00:11:35.216 }, 00:11:35.216 { 00:11:35.216 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.216 "dma_device_type": 2 00:11:35.216 }, 00:11:35.216 { 00:11:35.216 "dma_device_id": "system", 00:11:35.216 "dma_device_type": 1 00:11:35.216 }, 00:11:35.216 { 00:11:35.216 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.216 "dma_device_type": 2 00:11:35.216 }, 00:11:35.216 { 00:11:35.216 "dma_device_id": "system", 00:11:35.216 "dma_device_type": 1 00:11:35.216 }, 00:11:35.216 { 00:11:35.216 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.216 "dma_device_type": 2 00:11:35.216 }, 00:11:35.216 { 00:11:35.216 "dma_device_id": "system", 00:11:35.216 "dma_device_type": 1 00:11:35.216 }, 00:11:35.216 { 00:11:35.216 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.216 "dma_device_type": 2 00:11:35.216 } 00:11:35.216 ], 00:11:35.216 "driver_specific": { 00:11:35.216 "raid": { 00:11:35.216 "uuid": "0972e4c5-cf89-4494-816b-76b1fbe33180", 00:11:35.216 "strip_size_kb": 64, 00:11:35.217 "state": "online", 00:11:35.217 "raid_level": "concat", 00:11:35.217 "superblock": true, 00:11:35.217 "num_base_bdevs": 4, 00:11:35.217 "num_base_bdevs_discovered": 4, 00:11:35.217 "num_base_bdevs_operational": 4, 00:11:35.217 "base_bdevs_list": [ 00:11:35.217 { 00:11:35.217 "name": "NewBaseBdev", 00:11:35.217 "uuid": "577067f7-00a9-4201-a3f0-c52f76e232d2", 00:11:35.217 "is_configured": true, 00:11:35.217 "data_offset": 2048, 00:11:35.217 "data_size": 63488 00:11:35.217 }, 00:11:35.217 { 00:11:35.217 "name": "BaseBdev2", 00:11:35.217 "uuid": "96ed9f53-f96c-44ba-8cce-15d078db6d73", 00:11:35.217 "is_configured": true, 00:11:35.217 "data_offset": 2048, 00:11:35.217 "data_size": 63488 00:11:35.217 }, 00:11:35.217 { 00:11:35.217 
"name": "BaseBdev3", 00:11:35.217 "uuid": "1c00cf7b-7349-467c-9d2a-ef4f65b89fd2", 00:11:35.217 "is_configured": true, 00:11:35.217 "data_offset": 2048, 00:11:35.217 "data_size": 63488 00:11:35.217 }, 00:11:35.217 { 00:11:35.217 "name": "BaseBdev4", 00:11:35.217 "uuid": "913e4a1f-9b29-42de-8cab-3c763fe922ee", 00:11:35.217 "is_configured": true, 00:11:35.217 "data_offset": 2048, 00:11:35.217 "data_size": 63488 00:11:35.217 } 00:11:35.217 ] 00:11:35.217 } 00:11:35.217 } 00:11:35.217 }' 00:11:35.217 16:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:35.217 16:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:35.217 BaseBdev2 00:11:35.217 BaseBdev3 00:11:35.217 BaseBdev4' 00:11:35.217 16:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:35.217 16:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:35.217 16:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:35.217 16:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:35.217 16:08:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.217 16:08:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.217 16:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:35.217 16:08:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.217 16:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:35.217 16:08:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:35.217 16:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:35.475 16:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:35.475 16:08:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.475 16:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:35.475 16:08:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.475 16:08:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.475 16:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:35.475 16:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:35.475 16:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:35.475 16:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:35.475 16:08:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.475 16:08:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.475 16:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:35.475 16:08:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.475 16:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:35.475 16:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:11:35.475 16:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:35.475 16:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:35.475 16:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:35.475 16:08:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.475 16:08:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.475 16:08:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.475 16:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:35.475 16:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:35.475 16:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:35.475 16:08:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.475 16:08:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.475 [2024-12-12 16:08:01.692627] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:35.475 [2024-12-12 16:08:01.692679] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:35.475 [2024-12-12 16:08:01.692786] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:35.475 [2024-12-12 16:08:01.692876] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:35.475 [2024-12-12 16:08:01.692906] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:11:35.475 16:08:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.475 16:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 73994 00:11:35.475 16:08:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 73994 ']' 00:11:35.475 16:08:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 73994 00:11:35.475 16:08:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:11:35.475 16:08:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:35.475 16:08:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73994 00:11:35.475 killing process with pid 73994 00:11:35.475 16:08:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:35.475 16:08:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:35.475 16:08:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73994' 00:11:35.475 16:08:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 73994 00:11:35.475 16:08:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 73994 00:11:35.475 [2024-12-12 16:08:01.726842] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:36.041 [2024-12-12 16:08:02.174784] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:37.418 16:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:37.418 00:11:37.419 real 0m12.008s 00:11:37.419 user 0m18.968s 00:11:37.419 sys 0m2.027s 00:11:37.419 16:08:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:37.419 16:08:03 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.419 ************************************ 00:11:37.419 END TEST raid_state_function_test_sb 00:11:37.419 ************************************ 00:11:37.419 16:08:03 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:11:37.419 16:08:03 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:37.419 16:08:03 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:37.419 16:08:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:37.419 ************************************ 00:11:37.419 START TEST raid_superblock_test 00:11:37.419 ************************************ 00:11:37.419 16:08:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 4 00:11:37.419 16:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:11:37.419 16:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:11:37.419 16:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:37.419 16:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:37.419 16:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:37.419 16:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:37.419 16:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:37.419 16:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:37.419 16:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:37.419 16:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:37.419 16:08:03 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:37.419 16:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:37.419 16:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:37.419 16:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:11:37.419 16:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:11:37.419 16:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:11:37.419 16:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74670 00:11:37.419 16:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74670 00:11:37.419 16:08:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 74670 ']' 00:11:37.419 16:08:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:37.419 16:08:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:37.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:37.419 16:08:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:37.419 16:08:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:37.419 16:08:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.419 16:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:37.419 [2024-12-12 16:08:03.592520] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:11:37.419 [2024-12-12 16:08:03.593205] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74670 ] 00:11:37.419 [2024-12-12 16:08:03.767967] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:37.678 [2024-12-12 16:08:03.907354] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:37.937 [2024-12-12 16:08:04.142884] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:37.937 [2024-12-12 16:08:04.142969] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:38.197 16:08:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:38.197 16:08:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:11:38.197 16:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:38.197 16:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:38.197 16:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:38.197 16:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:38.197 16:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:38.197 16:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:38.197 16:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:38.197 16:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:38.197 16:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:38.197 
16:08:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.197 16:08:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.197 malloc1 00:11:38.197 16:08:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.197 16:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:38.197 16:08:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.197 16:08:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.197 [2024-12-12 16:08:04.473857] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:38.197 [2024-12-12 16:08:04.474150] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:38.197 [2024-12-12 16:08:04.474186] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:38.197 [2024-12-12 16:08:04.474197] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:38.197 [2024-12-12 16:08:04.476636] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:38.197 [2024-12-12 16:08:04.476674] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:38.197 pt1 00:11:38.197 16:08:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.197 16:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:38.197 16:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:38.197 16:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:38.197 16:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:38.197 16:08:04 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:38.197 16:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:38.197 16:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:38.197 16:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:38.197 16:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:38.197 16:08:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.197 16:08:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.197 malloc2 00:11:38.197 16:08:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.197 16:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:38.197 16:08:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.197 16:08:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.197 [2024-12-12 16:08:04.534099] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:38.197 [2024-12-12 16:08:04.534176] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:38.197 [2024-12-12 16:08:04.534200] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:38.197 [2024-12-12 16:08:04.534221] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:38.197 [2024-12-12 16:08:04.536614] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:38.197 [2024-12-12 16:08:04.536653] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:38.197 
pt2 00:11:38.197 16:08:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.197 16:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:38.198 16:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:38.198 16:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:38.198 16:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:38.198 16:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:38.198 16:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:38.198 16:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:38.198 16:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:38.198 16:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:38.198 16:08:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.198 16:08:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.457 malloc3 00:11:38.457 16:08:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.457 16:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:38.457 16:08:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.457 16:08:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.457 [2024-12-12 16:08:04.608164] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:38.457 [2024-12-12 16:08:04.608229] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:38.458 [2024-12-12 16:08:04.608254] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:38.458 [2024-12-12 16:08:04.608264] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:38.458 [2024-12-12 16:08:04.610620] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:38.458 [2024-12-12 16:08:04.610656] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:38.458 pt3 00:11:38.458 16:08:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.458 16:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:38.458 16:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:38.458 16:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:11:38.458 16:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:11:38.458 16:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:11:38.458 16:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:38.458 16:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:38.458 16:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:38.458 16:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:11:38.458 16:08:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.458 16:08:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.458 malloc4 00:11:38.458 16:08:04 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.458 16:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:38.458 16:08:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.458 16:08:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.458 [2024-12-12 16:08:04.669040] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:38.458 [2024-12-12 16:08:04.669114] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:38.458 [2024-12-12 16:08:04.669136] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:38.458 [2024-12-12 16:08:04.669146] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:38.458 [2024-12-12 16:08:04.671457] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:38.458 [2024-12-12 16:08:04.671490] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:38.458 pt4 00:11:38.458 16:08:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.458 16:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:38.458 16:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:38.458 16:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:11:38.458 16:08:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.458 16:08:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.458 [2024-12-12 16:08:04.681076] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:38.458 [2024-12-12 
16:08:04.683110] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:38.458 [2024-12-12 16:08:04.683199] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:38.458 [2024-12-12 16:08:04.683247] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:38.458 [2024-12-12 16:08:04.683429] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:38.458 [2024-12-12 16:08:04.683446] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:38.458 [2024-12-12 16:08:04.683716] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:38.458 [2024-12-12 16:08:04.683907] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:38.458 [2024-12-12 16:08:04.683928] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:38.458 [2024-12-12 16:08:04.684085] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:38.458 16:08:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.458 16:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:38.458 16:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:38.458 16:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:38.458 16:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:38.458 16:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:38.458 16:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:38.458 16:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:11:38.458 16:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:38.458 16:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:38.458 16:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:38.458 16:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.458 16:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:38.458 16:08:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.458 16:08:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.458 16:08:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.458 16:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:38.458 "name": "raid_bdev1", 00:11:38.458 "uuid": "4d5a857e-9251-49b1-b8c4-0ba855a8f7fd", 00:11:38.458 "strip_size_kb": 64, 00:11:38.458 "state": "online", 00:11:38.458 "raid_level": "concat", 00:11:38.458 "superblock": true, 00:11:38.458 "num_base_bdevs": 4, 00:11:38.458 "num_base_bdevs_discovered": 4, 00:11:38.458 "num_base_bdevs_operational": 4, 00:11:38.458 "base_bdevs_list": [ 00:11:38.458 { 00:11:38.458 "name": "pt1", 00:11:38.458 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:38.458 "is_configured": true, 00:11:38.458 "data_offset": 2048, 00:11:38.458 "data_size": 63488 00:11:38.458 }, 00:11:38.458 { 00:11:38.458 "name": "pt2", 00:11:38.458 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:38.458 "is_configured": true, 00:11:38.458 "data_offset": 2048, 00:11:38.458 "data_size": 63488 00:11:38.458 }, 00:11:38.458 { 00:11:38.458 "name": "pt3", 00:11:38.458 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:38.458 "is_configured": true, 00:11:38.458 "data_offset": 2048, 00:11:38.458 
"data_size": 63488 00:11:38.458 }, 00:11:38.458 { 00:11:38.458 "name": "pt4", 00:11:38.458 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:38.458 "is_configured": true, 00:11:38.458 "data_offset": 2048, 00:11:38.458 "data_size": 63488 00:11:38.458 } 00:11:38.458 ] 00:11:38.458 }' 00:11:38.458 16:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:38.458 16:08:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.717 16:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:38.717 16:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:38.717 16:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:38.717 16:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:38.717 16:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:38.717 16:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:38.717 16:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:38.717 16:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:38.717 16:08:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.717 16:08:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.980 [2024-12-12 16:08:05.068789] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:38.980 16:08:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.980 16:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:38.980 "name": "raid_bdev1", 00:11:38.980 "aliases": [ 00:11:38.980 "4d5a857e-9251-49b1-b8c4-0ba855a8f7fd" 
00:11:38.980 ], 00:11:38.980 "product_name": "Raid Volume", 00:11:38.980 "block_size": 512, 00:11:38.980 "num_blocks": 253952, 00:11:38.980 "uuid": "4d5a857e-9251-49b1-b8c4-0ba855a8f7fd", 00:11:38.980 "assigned_rate_limits": { 00:11:38.980 "rw_ios_per_sec": 0, 00:11:38.980 "rw_mbytes_per_sec": 0, 00:11:38.980 "r_mbytes_per_sec": 0, 00:11:38.980 "w_mbytes_per_sec": 0 00:11:38.980 }, 00:11:38.980 "claimed": false, 00:11:38.980 "zoned": false, 00:11:38.980 "supported_io_types": { 00:11:38.980 "read": true, 00:11:38.980 "write": true, 00:11:38.980 "unmap": true, 00:11:38.980 "flush": true, 00:11:38.980 "reset": true, 00:11:38.980 "nvme_admin": false, 00:11:38.980 "nvme_io": false, 00:11:38.980 "nvme_io_md": false, 00:11:38.980 "write_zeroes": true, 00:11:38.980 "zcopy": false, 00:11:38.980 "get_zone_info": false, 00:11:38.980 "zone_management": false, 00:11:38.980 "zone_append": false, 00:11:38.980 "compare": false, 00:11:38.980 "compare_and_write": false, 00:11:38.980 "abort": false, 00:11:38.980 "seek_hole": false, 00:11:38.980 "seek_data": false, 00:11:38.980 "copy": false, 00:11:38.980 "nvme_iov_md": false 00:11:38.980 }, 00:11:38.980 "memory_domains": [ 00:11:38.980 { 00:11:38.980 "dma_device_id": "system", 00:11:38.980 "dma_device_type": 1 00:11:38.980 }, 00:11:38.980 { 00:11:38.980 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:38.980 "dma_device_type": 2 00:11:38.980 }, 00:11:38.980 { 00:11:38.980 "dma_device_id": "system", 00:11:38.980 "dma_device_type": 1 00:11:38.980 }, 00:11:38.980 { 00:11:38.980 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:38.980 "dma_device_type": 2 00:11:38.980 }, 00:11:38.980 { 00:11:38.980 "dma_device_id": "system", 00:11:38.980 "dma_device_type": 1 00:11:38.980 }, 00:11:38.980 { 00:11:38.980 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:38.980 "dma_device_type": 2 00:11:38.980 }, 00:11:38.980 { 00:11:38.980 "dma_device_id": "system", 00:11:38.980 "dma_device_type": 1 00:11:38.980 }, 00:11:38.980 { 00:11:38.980 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:38.980 "dma_device_type": 2 00:11:38.980 } 00:11:38.980 ], 00:11:38.980 "driver_specific": { 00:11:38.980 "raid": { 00:11:38.980 "uuid": "4d5a857e-9251-49b1-b8c4-0ba855a8f7fd", 00:11:38.980 "strip_size_kb": 64, 00:11:38.980 "state": "online", 00:11:38.980 "raid_level": "concat", 00:11:38.980 "superblock": true, 00:11:38.980 "num_base_bdevs": 4, 00:11:38.980 "num_base_bdevs_discovered": 4, 00:11:38.980 "num_base_bdevs_operational": 4, 00:11:38.981 "base_bdevs_list": [ 00:11:38.981 { 00:11:38.981 "name": "pt1", 00:11:38.981 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:38.981 "is_configured": true, 00:11:38.981 "data_offset": 2048, 00:11:38.981 "data_size": 63488 00:11:38.981 }, 00:11:38.981 { 00:11:38.981 "name": "pt2", 00:11:38.981 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:38.981 "is_configured": true, 00:11:38.981 "data_offset": 2048, 00:11:38.981 "data_size": 63488 00:11:38.981 }, 00:11:38.981 { 00:11:38.981 "name": "pt3", 00:11:38.981 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:38.981 "is_configured": true, 00:11:38.981 "data_offset": 2048, 00:11:38.981 "data_size": 63488 00:11:38.981 }, 00:11:38.981 { 00:11:38.981 "name": "pt4", 00:11:38.981 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:38.981 "is_configured": true, 00:11:38.981 "data_offset": 2048, 00:11:38.981 "data_size": 63488 00:11:38.981 } 00:11:38.981 ] 00:11:38.981 } 00:11:38.981 } 00:11:38.981 }' 00:11:38.981 16:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:38.981 16:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:38.981 pt2 00:11:38.981 pt3 00:11:38.981 pt4' 00:11:38.981 16:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:38.981 16:08:05 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:38.981 16:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:38.981 16:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:38.981 16:08:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.981 16:08:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.981 16:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:38.981 16:08:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.981 16:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:38.981 16:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:38.981 16:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:38.981 16:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:38.981 16:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:38.981 16:08:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.981 16:08:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.981 16:08:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.981 16:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:38.981 16:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:38.981 16:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:38.981 16:08:05 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:38.981 16:08:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.981 16:08:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.981 16:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:38.981 16:08:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.981 16:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:38.981 16:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:38.981 16:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:38.981 16:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:38.981 16:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:38.981 16:08:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.981 16:08:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.981 16:08:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.981 16:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:38.981 16:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:38.981 16:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:38.981 16:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:38.981 16:08:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:38.981 16:08:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.247 [2024-12-12 16:08:05.332281] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:39.247 16:08:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.247 16:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=4d5a857e-9251-49b1-b8c4-0ba855a8f7fd 00:11:39.247 16:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 4d5a857e-9251-49b1-b8c4-0ba855a8f7fd ']' 00:11:39.247 16:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:39.247 16:08:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.247 16:08:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.247 [2024-12-12 16:08:05.363937] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:39.247 [2024-12-12 16:08:05.363966] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:39.247 [2024-12-12 16:08:05.364060] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:39.247 [2024-12-12 16:08:05.364141] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:39.247 [2024-12-12 16:08:05.364161] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:39.247 16:08:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.247 16:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.247 16:08:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.247 16:08:05 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:39.247 16:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:39.247 16:08:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.247 16:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:39.247 16:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:39.247 16:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:39.247 16:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:39.247 16:08:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.247 16:08:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.247 16:08:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.247 16:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:39.247 16:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:39.247 16:08:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.247 16:08:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.247 16:08:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.247 16:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:39.247 16:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:39.247 16:08:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.247 16:08:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.247 16:08:05 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.247 16:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:39.247 16:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:11:39.247 16:08:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.247 16:08:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.247 16:08:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.247 16:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:39.247 16:08:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.247 16:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:39.248 16:08:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.248 16:08:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.248 16:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:39.248 16:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:39.248 16:08:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:11:39.248 16:08:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:39.248 16:08:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:39.248 16:08:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:39.248 16:08:05 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:39.248 16:08:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:39.248 16:08:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:39.248 16:08:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.248 16:08:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.248 [2024-12-12 16:08:05.499761] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:39.248 [2024-12-12 16:08:05.501855] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:39.248 [2024-12-12 16:08:05.501922] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:39.248 [2024-12-12 16:08:05.501961] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:11:39.248 [2024-12-12 16:08:05.502020] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:39.248 [2024-12-12 16:08:05.502079] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:39.248 [2024-12-12 16:08:05.502101] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:39.248 [2024-12-12 16:08:05.502122] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:11:39.248 [2024-12-12 16:08:05.502137] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:39.248 [2024-12-12 16:08:05.502152] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:11:39.248 request: 00:11:39.248 { 00:11:39.248 "name": "raid_bdev1", 00:11:39.248 "raid_level": "concat", 00:11:39.248 "base_bdevs": [ 00:11:39.248 "malloc1", 00:11:39.248 "malloc2", 00:11:39.248 "malloc3", 00:11:39.248 "malloc4" 00:11:39.248 ], 00:11:39.248 "strip_size_kb": 64, 00:11:39.248 "superblock": false, 00:11:39.248 "method": "bdev_raid_create", 00:11:39.248 "req_id": 1 00:11:39.248 } 00:11:39.248 Got JSON-RPC error response 00:11:39.248 response: 00:11:39.248 { 00:11:39.248 "code": -17, 00:11:39.248 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:39.248 } 00:11:39.248 16:08:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:39.248 16:08:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:11:39.248 16:08:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:39.248 16:08:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:39.248 16:08:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:39.248 16:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.248 16:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:39.248 16:08:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.248 16:08:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.248 16:08:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.248 16:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:39.248 16:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:39.248 16:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:11:39.248 16:08:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.248 16:08:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.248 [2024-12-12 16:08:05.559629] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:39.248 [2024-12-12 16:08:05.559682] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:39.248 [2024-12-12 16:08:05.559699] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:39.248 [2024-12-12 16:08:05.559710] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:39.248 [2024-12-12 16:08:05.562222] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:39.248 [2024-12-12 16:08:05.562259] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:39.248 [2024-12-12 16:08:05.562343] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:39.248 [2024-12-12 16:08:05.562398] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:39.248 pt1 00:11:39.248 16:08:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.248 16:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:11:39.248 16:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:39.248 16:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:39.248 16:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:39.248 16:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:39.248 16:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:11:39.248 16:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:39.248 16:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:39.248 16:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:39.248 16:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:39.248 16:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:39.248 16:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.248 16:08:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.248 16:08:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.248 16:08:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.508 16:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:39.508 "name": "raid_bdev1", 00:11:39.508 "uuid": "4d5a857e-9251-49b1-b8c4-0ba855a8f7fd", 00:11:39.508 "strip_size_kb": 64, 00:11:39.508 "state": "configuring", 00:11:39.508 "raid_level": "concat", 00:11:39.508 "superblock": true, 00:11:39.508 "num_base_bdevs": 4, 00:11:39.508 "num_base_bdevs_discovered": 1, 00:11:39.508 "num_base_bdevs_operational": 4, 00:11:39.508 "base_bdevs_list": [ 00:11:39.508 { 00:11:39.508 "name": "pt1", 00:11:39.508 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:39.508 "is_configured": true, 00:11:39.508 "data_offset": 2048, 00:11:39.508 "data_size": 63488 00:11:39.508 }, 00:11:39.508 { 00:11:39.508 "name": null, 00:11:39.508 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:39.508 "is_configured": false, 00:11:39.508 "data_offset": 2048, 00:11:39.508 "data_size": 63488 00:11:39.508 }, 00:11:39.508 { 00:11:39.508 "name": null, 00:11:39.508 
"uuid": "00000000-0000-0000-0000-000000000003", 00:11:39.508 "is_configured": false, 00:11:39.508 "data_offset": 2048, 00:11:39.508 "data_size": 63488 00:11:39.508 }, 00:11:39.508 { 00:11:39.508 "name": null, 00:11:39.508 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:39.508 "is_configured": false, 00:11:39.508 "data_offset": 2048, 00:11:39.508 "data_size": 63488 00:11:39.508 } 00:11:39.508 ] 00:11:39.508 }' 00:11:39.508 16:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:39.508 16:08:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.768 16:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:11:39.768 16:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:39.769 16:08:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.769 16:08:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.769 [2024-12-12 16:08:06.026935] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:39.769 [2024-12-12 16:08:06.027044] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:39.769 [2024-12-12 16:08:06.027069] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:11:39.769 [2024-12-12 16:08:06.027081] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:39.769 [2024-12-12 16:08:06.027636] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:39.769 [2024-12-12 16:08:06.027667] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:39.769 [2024-12-12 16:08:06.027770] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:39.769 [2024-12-12 16:08:06.027805] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:39.769 pt2 00:11:39.769 16:08:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.769 16:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:39.769 16:08:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.769 16:08:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.769 [2024-12-12 16:08:06.038879] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:39.769 16:08:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.769 16:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:11:39.769 16:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:39.769 16:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:39.769 16:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:39.769 16:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:39.769 16:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:39.769 16:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:39.769 16:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:39.769 16:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:39.769 16:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:39.769 16:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.769 16:08:06 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.769 16:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:39.769 16:08:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.769 16:08:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.769 16:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:39.769 "name": "raid_bdev1", 00:11:39.769 "uuid": "4d5a857e-9251-49b1-b8c4-0ba855a8f7fd", 00:11:39.769 "strip_size_kb": 64, 00:11:39.769 "state": "configuring", 00:11:39.769 "raid_level": "concat", 00:11:39.769 "superblock": true, 00:11:39.769 "num_base_bdevs": 4, 00:11:39.769 "num_base_bdevs_discovered": 1, 00:11:39.769 "num_base_bdevs_operational": 4, 00:11:39.769 "base_bdevs_list": [ 00:11:39.769 { 00:11:39.769 "name": "pt1", 00:11:39.769 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:39.769 "is_configured": true, 00:11:39.769 "data_offset": 2048, 00:11:39.769 "data_size": 63488 00:11:39.769 }, 00:11:39.769 { 00:11:39.769 "name": null, 00:11:39.769 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:39.769 "is_configured": false, 00:11:39.769 "data_offset": 0, 00:11:39.769 "data_size": 63488 00:11:39.769 }, 00:11:39.769 { 00:11:39.769 "name": null, 00:11:39.769 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:39.769 "is_configured": false, 00:11:39.769 "data_offset": 2048, 00:11:39.769 "data_size": 63488 00:11:39.769 }, 00:11:39.769 { 00:11:39.769 "name": null, 00:11:39.769 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:39.769 "is_configured": false, 00:11:39.769 "data_offset": 2048, 00:11:39.769 "data_size": 63488 00:11:39.769 } 00:11:39.769 ] 00:11:39.769 }' 00:11:39.769 16:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:39.769 16:08:06 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:40.340 16:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:40.340 16:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:40.340 16:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:40.340 16:08:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.340 16:08:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.340 [2024-12-12 16:08:06.446219] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:40.340 [2024-12-12 16:08:06.446317] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:40.340 [2024-12-12 16:08:06.446344] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:11:40.340 [2024-12-12 16:08:06.446357] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:40.340 [2024-12-12 16:08:06.446931] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:40.340 [2024-12-12 16:08:06.446959] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:40.340 [2024-12-12 16:08:06.447071] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:40.340 [2024-12-12 16:08:06.447102] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:40.340 pt2 00:11:40.340 16:08:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.340 16:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:40.340 16:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:40.340 16:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:40.340 16:08:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.340 16:08:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.340 [2024-12-12 16:08:06.458128] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:40.340 [2024-12-12 16:08:06.458184] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:40.340 [2024-12-12 16:08:06.458206] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:11:40.340 [2024-12-12 16:08:06.458218] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:40.340 [2024-12-12 16:08:06.458660] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:40.340 [2024-12-12 16:08:06.458686] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:40.340 [2024-12-12 16:08:06.458762] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:40.340 [2024-12-12 16:08:06.458795] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:40.340 pt3 00:11:40.340 16:08:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.340 16:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:40.340 16:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:40.340 16:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:40.340 16:08:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.340 16:08:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.340 [2024-12-12 16:08:06.470075] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:40.340 [2024-12-12 16:08:06.470120] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:40.340 [2024-12-12 16:08:06.470137] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:11:40.340 [2024-12-12 16:08:06.470144] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:40.340 [2024-12-12 16:08:06.470534] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:40.340 [2024-12-12 16:08:06.470560] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:40.340 [2024-12-12 16:08:06.470631] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:40.340 [2024-12-12 16:08:06.470658] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:40.340 [2024-12-12 16:08:06.470807] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:40.340 [2024-12-12 16:08:06.470819] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:40.340 [2024-12-12 16:08:06.471108] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:40.340 [2024-12-12 16:08:06.471276] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:40.340 [2024-12-12 16:08:06.471293] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:40.340 [2024-12-12 16:08:06.471430] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:40.340 pt4 00:11:40.340 16:08:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.340 16:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:40.340 16:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:11:40.340 16:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:40.340 16:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:40.340 16:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:40.340 16:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:40.340 16:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:40.340 16:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:40.340 16:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:40.340 16:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:40.340 16:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:40.340 16:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:40.340 16:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:40.340 16:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.340 16:08:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.340 16:08:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.340 16:08:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.340 16:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:40.340 "name": "raid_bdev1", 00:11:40.340 "uuid": "4d5a857e-9251-49b1-b8c4-0ba855a8f7fd", 00:11:40.340 "strip_size_kb": 64, 00:11:40.340 "state": "online", 00:11:40.340 "raid_level": "concat", 00:11:40.340 
"superblock": true, 00:11:40.340 "num_base_bdevs": 4, 00:11:40.340 "num_base_bdevs_discovered": 4, 00:11:40.340 "num_base_bdevs_operational": 4, 00:11:40.340 "base_bdevs_list": [ 00:11:40.340 { 00:11:40.340 "name": "pt1", 00:11:40.340 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:40.340 "is_configured": true, 00:11:40.340 "data_offset": 2048, 00:11:40.340 "data_size": 63488 00:11:40.340 }, 00:11:40.340 { 00:11:40.340 "name": "pt2", 00:11:40.340 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:40.340 "is_configured": true, 00:11:40.340 "data_offset": 2048, 00:11:40.340 "data_size": 63488 00:11:40.340 }, 00:11:40.340 { 00:11:40.341 "name": "pt3", 00:11:40.341 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:40.341 "is_configured": true, 00:11:40.341 "data_offset": 2048, 00:11:40.341 "data_size": 63488 00:11:40.341 }, 00:11:40.341 { 00:11:40.341 "name": "pt4", 00:11:40.341 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:40.341 "is_configured": true, 00:11:40.341 "data_offset": 2048, 00:11:40.341 "data_size": 63488 00:11:40.341 } 00:11:40.341 ] 00:11:40.341 }' 00:11:40.341 16:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:40.341 16:08:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.601 16:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:40.601 16:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:40.601 16:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:40.601 16:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:40.601 16:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:40.601 16:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:40.601 16:08:06 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:40.601 16:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:40.601 16:08:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.601 16:08:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.601 [2024-12-12 16:08:06.873859] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:40.601 16:08:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.601 16:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:40.601 "name": "raid_bdev1", 00:11:40.601 "aliases": [ 00:11:40.601 "4d5a857e-9251-49b1-b8c4-0ba855a8f7fd" 00:11:40.601 ], 00:11:40.601 "product_name": "Raid Volume", 00:11:40.601 "block_size": 512, 00:11:40.601 "num_blocks": 253952, 00:11:40.601 "uuid": "4d5a857e-9251-49b1-b8c4-0ba855a8f7fd", 00:11:40.601 "assigned_rate_limits": { 00:11:40.601 "rw_ios_per_sec": 0, 00:11:40.601 "rw_mbytes_per_sec": 0, 00:11:40.601 "r_mbytes_per_sec": 0, 00:11:40.601 "w_mbytes_per_sec": 0 00:11:40.601 }, 00:11:40.601 "claimed": false, 00:11:40.601 "zoned": false, 00:11:40.601 "supported_io_types": { 00:11:40.601 "read": true, 00:11:40.601 "write": true, 00:11:40.601 "unmap": true, 00:11:40.601 "flush": true, 00:11:40.601 "reset": true, 00:11:40.601 "nvme_admin": false, 00:11:40.601 "nvme_io": false, 00:11:40.601 "nvme_io_md": false, 00:11:40.601 "write_zeroes": true, 00:11:40.601 "zcopy": false, 00:11:40.601 "get_zone_info": false, 00:11:40.601 "zone_management": false, 00:11:40.601 "zone_append": false, 00:11:40.601 "compare": false, 00:11:40.601 "compare_and_write": false, 00:11:40.601 "abort": false, 00:11:40.601 "seek_hole": false, 00:11:40.601 "seek_data": false, 00:11:40.601 "copy": false, 00:11:40.601 "nvme_iov_md": false 00:11:40.601 }, 00:11:40.602 
"memory_domains": [ 00:11:40.602 { 00:11:40.602 "dma_device_id": "system", 00:11:40.602 "dma_device_type": 1 00:11:40.602 }, 00:11:40.602 { 00:11:40.602 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:40.602 "dma_device_type": 2 00:11:40.602 }, 00:11:40.602 { 00:11:40.602 "dma_device_id": "system", 00:11:40.602 "dma_device_type": 1 00:11:40.602 }, 00:11:40.602 { 00:11:40.602 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:40.602 "dma_device_type": 2 00:11:40.602 }, 00:11:40.602 { 00:11:40.602 "dma_device_id": "system", 00:11:40.602 "dma_device_type": 1 00:11:40.602 }, 00:11:40.602 { 00:11:40.602 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:40.602 "dma_device_type": 2 00:11:40.602 }, 00:11:40.602 { 00:11:40.602 "dma_device_id": "system", 00:11:40.602 "dma_device_type": 1 00:11:40.602 }, 00:11:40.602 { 00:11:40.602 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:40.602 "dma_device_type": 2 00:11:40.602 } 00:11:40.602 ], 00:11:40.602 "driver_specific": { 00:11:40.602 "raid": { 00:11:40.602 "uuid": "4d5a857e-9251-49b1-b8c4-0ba855a8f7fd", 00:11:40.602 "strip_size_kb": 64, 00:11:40.602 "state": "online", 00:11:40.602 "raid_level": "concat", 00:11:40.602 "superblock": true, 00:11:40.602 "num_base_bdevs": 4, 00:11:40.602 "num_base_bdevs_discovered": 4, 00:11:40.602 "num_base_bdevs_operational": 4, 00:11:40.602 "base_bdevs_list": [ 00:11:40.602 { 00:11:40.602 "name": "pt1", 00:11:40.602 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:40.602 "is_configured": true, 00:11:40.602 "data_offset": 2048, 00:11:40.602 "data_size": 63488 00:11:40.602 }, 00:11:40.602 { 00:11:40.602 "name": "pt2", 00:11:40.602 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:40.602 "is_configured": true, 00:11:40.602 "data_offset": 2048, 00:11:40.602 "data_size": 63488 00:11:40.602 }, 00:11:40.602 { 00:11:40.602 "name": "pt3", 00:11:40.602 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:40.602 "is_configured": true, 00:11:40.602 "data_offset": 2048, 00:11:40.602 "data_size": 63488 
00:11:40.602 }, 00:11:40.602 { 00:11:40.602 "name": "pt4", 00:11:40.602 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:40.602 "is_configured": true, 00:11:40.602 "data_offset": 2048, 00:11:40.602 "data_size": 63488 00:11:40.602 } 00:11:40.602 ] 00:11:40.602 } 00:11:40.602 } 00:11:40.602 }' 00:11:40.602 16:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:40.862 16:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:40.862 pt2 00:11:40.862 pt3 00:11:40.862 pt4' 00:11:40.862 16:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:40.862 16:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:40.862 16:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:40.862 16:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:40.862 16:08:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.862 16:08:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.862 16:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:40.862 16:08:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.862 16:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:40.862 16:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:40.862 16:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:40.862 16:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:40.862 16:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:40.862 16:08:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.862 16:08:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.862 16:08:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.862 16:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:40.862 16:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:40.862 16:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:40.862 16:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:40.862 16:08:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.862 16:08:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.862 16:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:40.862 16:08:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.862 16:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:40.862 16:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:40.862 16:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:40.862 16:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:40.862 16:08:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.862 16:08:07 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:40.862 16:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:40.862 16:08:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.862 16:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:40.862 16:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:40.862 16:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:40.862 16:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:40.862 16:08:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.862 16:08:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.862 [2024-12-12 16:08:07.145316] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:40.862 16:08:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.862 16:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 4d5a857e-9251-49b1-b8c4-0ba855a8f7fd '!=' 4d5a857e-9251-49b1-b8c4-0ba855a8f7fd ']' 00:11:40.862 16:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:11:40.862 16:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:40.862 16:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:40.862 16:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74670 00:11:40.862 16:08:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 74670 ']' 00:11:40.862 16:08:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 74670 00:11:40.862 16:08:07 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:11:40.863 16:08:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:40.863 16:08:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74670 00:11:41.133 16:08:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:41.133 16:08:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:41.133 killing process with pid 74670 00:11:41.133 16:08:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74670' 00:11:41.133 16:08:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 74670 00:11:41.133 [2024-12-12 16:08:07.216354] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:41.133 16:08:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 74670 00:11:41.133 [2024-12-12 16:08:07.216482] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:41.133 [2024-12-12 16:08:07.216571] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:41.133 [2024-12-12 16:08:07.216586] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:41.400 [2024-12-12 16:08:07.649711] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:42.783 16:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:42.783 00:11:42.783 real 0m5.381s 00:11:42.783 user 0m7.393s 00:11:42.783 sys 0m1.005s 00:11:42.783 16:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:42.783 16:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.783 ************************************ 00:11:42.783 END TEST raid_superblock_test 
00:11:42.783 ************************************ 00:11:42.783 16:08:08 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:11:42.783 16:08:08 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:42.783 16:08:08 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:42.783 16:08:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:42.783 ************************************ 00:11:42.783 START TEST raid_read_error_test 00:11:42.783 ************************************ 00:11:42.783 16:08:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 read 00:11:42.783 16:08:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:42.783 16:08:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:42.783 16:08:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:42.783 16:08:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:42.783 16:08:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:42.783 16:08:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:42.783 16:08:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:42.783 16:08:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:42.783 16:08:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:42.783 16:08:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:42.783 16:08:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:42.783 16:08:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:42.783 16:08:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # 
(( i++ )) 00:11:42.783 16:08:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:42.783 16:08:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:42.783 16:08:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:42.783 16:08:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:42.783 16:08:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:42.783 16:08:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:42.783 16:08:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:42.783 16:08:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:42.783 16:08:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:42.783 16:08:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:42.783 16:08:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:42.783 16:08:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:42.783 16:08:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:42.783 16:08:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:42.783 16:08:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:42.783 16:08:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.1NIPyIm47P 00:11:42.783 16:08:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=74930 00:11:42.783 16:08:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z 
-f -L bdev_raid 00:11:42.783 16:08:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 74930 00:11:42.783 16:08:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 74930 ']' 00:11:42.783 16:08:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:42.783 16:08:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:42.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:42.783 16:08:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:42.783 16:08:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:42.783 16:08:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.783 [2024-12-12 16:08:09.065684] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:11:42.783 [2024-12-12 16:08:09.065804] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74930 ] 00:11:43.044 [2024-12-12 16:08:09.227944] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:43.044 [2024-12-12 16:08:09.366712] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:43.304 [2024-12-12 16:08:09.608325] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:43.304 [2024-12-12 16:08:09.608398] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:43.564 16:08:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:43.564 16:08:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:43.564 16:08:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:43.564 16:08:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:43.564 16:08:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.564 16:08:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.824 BaseBdev1_malloc 00:11:43.824 16:08:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.824 16:08:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:43.824 16:08:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.824 16:08:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.824 true 00:11:43.824 16:08:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:43.824 16:08:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:43.824 16:08:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.824 16:08:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.824 [2024-12-12 16:08:09.965089] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:43.824 [2024-12-12 16:08:09.965158] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:43.824 [2024-12-12 16:08:09.965179] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:43.824 [2024-12-12 16:08:09.965192] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:43.824 [2024-12-12 16:08:09.967535] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:43.824 [2024-12-12 16:08:09.967575] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:43.824 BaseBdev1 00:11:43.824 16:08:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.824 16:08:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:43.824 16:08:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:43.824 16:08:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.824 16:08:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.824 BaseBdev2_malloc 00:11:43.824 16:08:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.824 16:08:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:43.824 16:08:10 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.824 16:08:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.824 true 00:11:43.824 16:08:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.824 16:08:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:43.824 16:08:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.824 16:08:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.824 [2024-12-12 16:08:10.040712] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:43.824 [2024-12-12 16:08:10.040788] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:43.824 [2024-12-12 16:08:10.040808] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:43.824 [2024-12-12 16:08:10.040821] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:43.824 [2024-12-12 16:08:10.043303] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:43.824 [2024-12-12 16:08:10.043341] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:43.824 BaseBdev2 00:11:43.824 16:08:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.824 16:08:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:43.824 16:08:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:43.824 16:08:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.824 16:08:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.824 BaseBdev3_malloc 00:11:43.825 16:08:10 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.825 16:08:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:43.825 16:08:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.825 16:08:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.825 true 00:11:43.825 16:08:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.825 16:08:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:43.825 16:08:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.825 16:08:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.825 [2024-12-12 16:08:10.126486] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:43.825 [2024-12-12 16:08:10.126554] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:43.825 [2024-12-12 16:08:10.126572] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:43.825 [2024-12-12 16:08:10.126584] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:43.825 [2024-12-12 16:08:10.128973] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:43.825 [2024-12-12 16:08:10.129009] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:43.825 BaseBdev3 00:11:43.825 16:08:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.825 16:08:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:43.825 16:08:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:11:43.825 16:08:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.825 16:08:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.085 BaseBdev4_malloc 00:11:44.085 16:08:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.085 16:08:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:44.085 16:08:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.085 16:08:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.085 true 00:11:44.085 16:08:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.085 16:08:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:44.085 16:08:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.085 16:08:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.085 [2024-12-12 16:08:10.200680] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:44.085 [2024-12-12 16:08:10.200748] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:44.085 [2024-12-12 16:08:10.200767] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:44.085 [2024-12-12 16:08:10.200779] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:44.085 [2024-12-12 16:08:10.203237] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:44.085 [2024-12-12 16:08:10.203275] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:44.085 BaseBdev4 00:11:44.085 16:08:10 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.085 16:08:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:44.085 16:08:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.085 16:08:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.085 [2024-12-12 16:08:10.212754] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:44.085 [2024-12-12 16:08:10.214953] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:44.085 [2024-12-12 16:08:10.215037] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:44.085 [2024-12-12 16:08:10.215104] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:44.085 [2024-12-12 16:08:10.215342] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:44.085 [2024-12-12 16:08:10.215363] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:44.086 [2024-12-12 16:08:10.215639] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:44.086 [2024-12-12 16:08:10.215835] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:44.086 [2024-12-12 16:08:10.215855] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:44.086 [2024-12-12 16:08:10.216046] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:44.086 16:08:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.086 16:08:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:44.086 16:08:10 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:44.086 16:08:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:44.086 16:08:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:44.086 16:08:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:44.086 16:08:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:44.086 16:08:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:44.086 16:08:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:44.086 16:08:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:44.086 16:08:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:44.086 16:08:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.086 16:08:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.086 16:08:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.086 16:08:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:44.086 16:08:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.086 16:08:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:44.086 "name": "raid_bdev1", 00:11:44.086 "uuid": "2cf4509e-58f1-4aa3-b762-0faf206f2dc4", 00:11:44.086 "strip_size_kb": 64, 00:11:44.086 "state": "online", 00:11:44.086 "raid_level": "concat", 00:11:44.086 "superblock": true, 00:11:44.086 "num_base_bdevs": 4, 00:11:44.086 "num_base_bdevs_discovered": 4, 00:11:44.086 "num_base_bdevs_operational": 4, 00:11:44.086 "base_bdevs_list": [ 
00:11:44.086 { 00:11:44.086 "name": "BaseBdev1", 00:11:44.086 "uuid": "7cf5d72f-fac5-5c84-affc-6375a36bb1c5", 00:11:44.086 "is_configured": true, 00:11:44.086 "data_offset": 2048, 00:11:44.086 "data_size": 63488 00:11:44.086 }, 00:11:44.086 { 00:11:44.086 "name": "BaseBdev2", 00:11:44.086 "uuid": "fd597ab2-a96a-5cd4-9a56-4e421d9e7ded", 00:11:44.086 "is_configured": true, 00:11:44.086 "data_offset": 2048, 00:11:44.086 "data_size": 63488 00:11:44.086 }, 00:11:44.086 { 00:11:44.086 "name": "BaseBdev3", 00:11:44.086 "uuid": "a5ef3bd1-5e9e-5cc1-924c-83f26d752875", 00:11:44.086 "is_configured": true, 00:11:44.086 "data_offset": 2048, 00:11:44.086 "data_size": 63488 00:11:44.086 }, 00:11:44.086 { 00:11:44.086 "name": "BaseBdev4", 00:11:44.086 "uuid": "ba6ef131-b1ee-53eb-8663-08a56d8a9d96", 00:11:44.086 "is_configured": true, 00:11:44.086 "data_offset": 2048, 00:11:44.086 "data_size": 63488 00:11:44.086 } 00:11:44.086 ] 00:11:44.086 }' 00:11:44.086 16:08:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:44.086 16:08:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.346 16:08:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:44.346 16:08:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:44.606 [2024-12-12 16:08:10.709589] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:45.547 16:08:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:45.547 16:08:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.547 16:08:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.547 16:08:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.547 16:08:11 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:45.547 16:08:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:45.547 16:08:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:45.547 16:08:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:45.547 16:08:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:45.547 16:08:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:45.547 16:08:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:45.547 16:08:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:45.547 16:08:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:45.547 16:08:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:45.547 16:08:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:45.547 16:08:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:45.547 16:08:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:45.547 16:08:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.547 16:08:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:45.547 16:08:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.547 16:08:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.547 16:08:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.547 16:08:11 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:45.547 "name": "raid_bdev1", 00:11:45.547 "uuid": "2cf4509e-58f1-4aa3-b762-0faf206f2dc4", 00:11:45.547 "strip_size_kb": 64, 00:11:45.547 "state": "online", 00:11:45.547 "raid_level": "concat", 00:11:45.547 "superblock": true, 00:11:45.547 "num_base_bdevs": 4, 00:11:45.547 "num_base_bdevs_discovered": 4, 00:11:45.547 "num_base_bdevs_operational": 4, 00:11:45.547 "base_bdevs_list": [ 00:11:45.547 { 00:11:45.547 "name": "BaseBdev1", 00:11:45.547 "uuid": "7cf5d72f-fac5-5c84-affc-6375a36bb1c5", 00:11:45.547 "is_configured": true, 00:11:45.547 "data_offset": 2048, 00:11:45.547 "data_size": 63488 00:11:45.547 }, 00:11:45.547 { 00:11:45.547 "name": "BaseBdev2", 00:11:45.547 "uuid": "fd597ab2-a96a-5cd4-9a56-4e421d9e7ded", 00:11:45.547 "is_configured": true, 00:11:45.547 "data_offset": 2048, 00:11:45.547 "data_size": 63488 00:11:45.547 }, 00:11:45.547 { 00:11:45.547 "name": "BaseBdev3", 00:11:45.547 "uuid": "a5ef3bd1-5e9e-5cc1-924c-83f26d752875", 00:11:45.547 "is_configured": true, 00:11:45.547 "data_offset": 2048, 00:11:45.547 "data_size": 63488 00:11:45.547 }, 00:11:45.547 { 00:11:45.547 "name": "BaseBdev4", 00:11:45.547 "uuid": "ba6ef131-b1ee-53eb-8663-08a56d8a9d96", 00:11:45.547 "is_configured": true, 00:11:45.547 "data_offset": 2048, 00:11:45.547 "data_size": 63488 00:11:45.547 } 00:11:45.547 ] 00:11:45.547 }' 00:11:45.547 16:08:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:45.547 16:08:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.807 16:08:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:45.807 16:08:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.807 16:08:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.807 [2024-12-12 16:08:12.070657] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:45.807 [2024-12-12 16:08:12.070722] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:45.807 [2024-12-12 16:08:12.073304] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:45.807 [2024-12-12 16:08:12.073392] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:45.807 [2024-12-12 16:08:12.073441] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:45.807 [2024-12-12 16:08:12.073458] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:45.807 { 00:11:45.807 "results": [ 00:11:45.807 { 00:11:45.807 "job": "raid_bdev1", 00:11:45.807 "core_mask": "0x1", 00:11:45.807 "workload": "randrw", 00:11:45.807 "percentage": 50, 00:11:45.807 "status": "finished", 00:11:45.807 "queue_depth": 1, 00:11:45.807 "io_size": 131072, 00:11:45.807 "runtime": 1.361628, 00:11:45.807 "iops": 13170.263831237313, 00:11:45.807 "mibps": 1646.282978904664, 00:11:45.807 "io_failed": 1, 00:11:45.807 "io_timeout": 0, 00:11:45.807 "avg_latency_us": 106.9216795401674, 00:11:45.807 "min_latency_us": 27.612227074235808, 00:11:45.807 "max_latency_us": 1345.0620087336245 00:11:45.807 } 00:11:45.807 ], 00:11:45.807 "core_count": 1 00:11:45.807 } 00:11:45.808 16:08:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.808 16:08:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 74930 00:11:45.808 16:08:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 74930 ']' 00:11:45.808 16:08:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 74930 00:11:45.808 16:08:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:11:45.808 16:08:12 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:45.808 16:08:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74930 00:11:45.808 16:08:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:45.808 16:08:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:45.808 killing process with pid 74930 00:11:45.808 16:08:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74930' 00:11:45.808 16:08:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 74930 00:11:45.808 [2024-12-12 16:08:12.119177] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:45.808 16:08:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 74930 00:11:46.378 [2024-12-12 16:08:12.477351] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:47.762 16:08:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:47.762 16:08:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.1NIPyIm47P 00:11:47.762 16:08:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:47.762 16:08:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:11:47.762 16:08:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:47.762 16:08:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:47.762 16:08:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:47.762 16:08:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:11:47.762 00:11:47.763 real 0m4.941s 00:11:47.763 user 0m5.622s 00:11:47.763 sys 0m0.684s 00:11:47.763 16:08:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:11:47.763 16:08:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.763 ************************************ 00:11:47.763 END TEST raid_read_error_test 00:11:47.763 ************************************ 00:11:47.763 16:08:13 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:11:47.763 16:08:13 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:47.763 16:08:13 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:47.763 16:08:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:47.763 ************************************ 00:11:47.763 START TEST raid_write_error_test 00:11:47.763 ************************************ 00:11:47.763 16:08:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 write 00:11:47.763 16:08:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:47.763 16:08:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:47.763 16:08:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:47.763 16:08:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:47.763 16:08:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:47.763 16:08:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:47.763 16:08:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:47.763 16:08:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:47.763 16:08:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:47.763 16:08:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:47.763 16:08:13 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:47.763 16:08:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:47.763 16:08:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:47.763 16:08:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:47.763 16:08:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:47.763 16:08:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:47.763 16:08:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:47.763 16:08:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:47.763 16:08:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:47.763 16:08:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:47.763 16:08:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:47.763 16:08:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:47.763 16:08:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:47.763 16:08:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:47.763 16:08:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:47.763 16:08:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:47.763 16:08:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:47.763 16:08:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:47.763 16:08:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.xypy2dCOhr 00:11:47.763 16:08:13 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75077 00:11:47.763 16:08:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:47.763 16:08:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75077 00:11:47.763 16:08:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 75077 ']' 00:11:47.763 16:08:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:47.763 16:08:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:47.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:47.763 16:08:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:47.763 16:08:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:47.763 16:08:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.763 [2024-12-12 16:08:14.074656] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:11:47.763 [2024-12-12 16:08:14.074775] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75077 ] 00:11:48.023 [2024-12-12 16:08:14.247850] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:48.283 [2024-12-12 16:08:14.391340] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:48.283 [2024-12-12 16:08:14.629977] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:48.283 [2024-12-12 16:08:14.630017] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:48.853 16:08:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:48.853 16:08:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:48.853 16:08:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:48.853 16:08:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:48.853 16:08:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.853 16:08:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.853 BaseBdev1_malloc 00:11:48.853 16:08:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.853 16:08:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:48.853 16:08:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.853 16:08:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.853 true 00:11:48.853 16:08:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:48.853 16:08:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:48.853 16:08:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.853 16:08:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.853 [2024-12-12 16:08:14.967387] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:48.853 [2024-12-12 16:08:14.967459] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:48.853 [2024-12-12 16:08:14.967483] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:48.853 [2024-12-12 16:08:14.967495] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:48.853 [2024-12-12 16:08:14.969976] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:48.853 [2024-12-12 16:08:14.970020] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:48.853 BaseBdev1 00:11:48.853 16:08:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.853 16:08:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:48.853 16:08:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:48.853 16:08:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.853 16:08:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.853 BaseBdev2_malloc 00:11:48.853 16:08:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.853 16:08:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:48.853 16:08:15 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.853 16:08:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.853 true 00:11:48.853 16:08:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.853 16:08:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:48.853 16:08:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.853 16:08:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.853 [2024-12-12 16:08:15.031132] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:48.853 [2024-12-12 16:08:15.031201] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:48.853 [2024-12-12 16:08:15.031219] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:48.853 [2024-12-12 16:08:15.031232] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:48.853 [2024-12-12 16:08:15.033681] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:48.853 [2024-12-12 16:08:15.033720] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:48.853 BaseBdev2 00:11:48.853 16:08:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.853 16:08:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:48.853 16:08:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:48.853 16:08:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.853 16:08:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:11:48.853 BaseBdev3_malloc 00:11:48.853 16:08:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.853 16:08:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:48.853 16:08:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.853 16:08:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.853 true 00:11:48.853 16:08:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.853 16:08:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:48.853 16:08:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.853 16:08:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.853 [2024-12-12 16:08:15.105622] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:48.853 [2024-12-12 16:08:15.105684] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:48.853 [2024-12-12 16:08:15.105703] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:48.853 [2024-12-12 16:08:15.105715] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:48.853 [2024-12-12 16:08:15.108107] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:48.853 [2024-12-12 16:08:15.108147] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:48.853 BaseBdev3 00:11:48.853 16:08:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.853 16:08:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:48.853 16:08:15 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:48.853 16:08:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.853 16:08:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.853 BaseBdev4_malloc 00:11:48.853 16:08:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.853 16:08:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:48.853 16:08:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.853 16:08:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.853 true 00:11:48.853 16:08:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.853 16:08:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:48.853 16:08:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.853 16:08:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.853 [2024-12-12 16:08:15.175005] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:48.853 [2024-12-12 16:08:15.175069] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:48.853 [2024-12-12 16:08:15.175090] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:48.853 [2024-12-12 16:08:15.175103] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:48.853 [2024-12-12 16:08:15.177747] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:48.853 [2024-12-12 16:08:15.177787] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:48.853 BaseBdev4 
00:11:48.853 16:08:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.853 16:08:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:48.853 16:08:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.853 16:08:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.853 [2024-12-12 16:08:15.183086] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:48.853 [2024-12-12 16:08:15.185385] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:48.853 [2024-12-12 16:08:15.185469] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:48.853 [2024-12-12 16:08:15.185534] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:48.854 [2024-12-12 16:08:15.185773] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:48.854 [2024-12-12 16:08:15.185794] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:48.854 [2024-12-12 16:08:15.186068] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:48.854 [2024-12-12 16:08:15.186251] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:48.854 [2024-12-12 16:08:15.186269] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:48.854 [2024-12-12 16:08:15.186452] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:48.854 16:08:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.854 16:08:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:11:48.854 16:08:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:48.854 16:08:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:48.854 16:08:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:48.854 16:08:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:48.854 16:08:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:48.854 16:08:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:48.854 16:08:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:48.854 16:08:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:48.854 16:08:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:48.854 16:08:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.854 16:08:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:48.854 16:08:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.854 16:08:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.113 16:08:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.113 16:08:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:49.113 "name": "raid_bdev1", 00:11:49.113 "uuid": "1d465a85-4172-4eb8-8985-60a20277b093", 00:11:49.113 "strip_size_kb": 64, 00:11:49.113 "state": "online", 00:11:49.113 "raid_level": "concat", 00:11:49.113 "superblock": true, 00:11:49.113 "num_base_bdevs": 4, 00:11:49.113 "num_base_bdevs_discovered": 4, 00:11:49.113 
"num_base_bdevs_operational": 4, 00:11:49.113 "base_bdevs_list": [ 00:11:49.113 { 00:11:49.113 "name": "BaseBdev1", 00:11:49.113 "uuid": "9b4fed3e-ed3d-5d14-830e-7a60067e3aa4", 00:11:49.113 "is_configured": true, 00:11:49.113 "data_offset": 2048, 00:11:49.113 "data_size": 63488 00:11:49.113 }, 00:11:49.113 { 00:11:49.113 "name": "BaseBdev2", 00:11:49.113 "uuid": "375c18c9-a01a-5fd5-8bcd-3c0f3564b2b7", 00:11:49.113 "is_configured": true, 00:11:49.113 "data_offset": 2048, 00:11:49.113 "data_size": 63488 00:11:49.113 }, 00:11:49.113 { 00:11:49.113 "name": "BaseBdev3", 00:11:49.113 "uuid": "147fc3bb-98b3-5c53-aca8-52ee3fc909d3", 00:11:49.113 "is_configured": true, 00:11:49.113 "data_offset": 2048, 00:11:49.113 "data_size": 63488 00:11:49.113 }, 00:11:49.113 { 00:11:49.113 "name": "BaseBdev4", 00:11:49.113 "uuid": "2507ff91-92a6-5c01-8bdc-601ef73280c6", 00:11:49.113 "is_configured": true, 00:11:49.113 "data_offset": 2048, 00:11:49.113 "data_size": 63488 00:11:49.113 } 00:11:49.113 ] 00:11:49.113 }' 00:11:49.113 16:08:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:49.113 16:08:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.373 16:08:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:49.373 16:08:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:49.373 [2024-12-12 16:08:15.627955] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:50.325 16:08:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:50.325 16:08:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.325 16:08:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.325 16:08:16 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.325 16:08:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:50.325 16:08:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:50.326 16:08:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:50.326 16:08:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:50.326 16:08:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:50.326 16:08:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:50.326 16:08:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:50.326 16:08:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:50.326 16:08:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:50.326 16:08:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:50.326 16:08:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:50.326 16:08:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:50.326 16:08:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:50.326 16:08:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.326 16:08:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:50.326 16:08:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.326 16:08:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.326 16:08:16 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.326 16:08:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:50.326 "name": "raid_bdev1", 00:11:50.326 "uuid": "1d465a85-4172-4eb8-8985-60a20277b093", 00:11:50.326 "strip_size_kb": 64, 00:11:50.326 "state": "online", 00:11:50.326 "raid_level": "concat", 00:11:50.326 "superblock": true, 00:11:50.326 "num_base_bdevs": 4, 00:11:50.326 "num_base_bdevs_discovered": 4, 00:11:50.326 "num_base_bdevs_operational": 4, 00:11:50.326 "base_bdevs_list": [ 00:11:50.326 { 00:11:50.326 "name": "BaseBdev1", 00:11:50.326 "uuid": "9b4fed3e-ed3d-5d14-830e-7a60067e3aa4", 00:11:50.326 "is_configured": true, 00:11:50.326 "data_offset": 2048, 00:11:50.326 "data_size": 63488 00:11:50.326 }, 00:11:50.326 { 00:11:50.326 "name": "BaseBdev2", 00:11:50.326 "uuid": "375c18c9-a01a-5fd5-8bcd-3c0f3564b2b7", 00:11:50.326 "is_configured": true, 00:11:50.326 "data_offset": 2048, 00:11:50.326 "data_size": 63488 00:11:50.326 }, 00:11:50.326 { 00:11:50.326 "name": "BaseBdev3", 00:11:50.326 "uuid": "147fc3bb-98b3-5c53-aca8-52ee3fc909d3", 00:11:50.326 "is_configured": true, 00:11:50.326 "data_offset": 2048, 00:11:50.326 "data_size": 63488 00:11:50.326 }, 00:11:50.326 { 00:11:50.326 "name": "BaseBdev4", 00:11:50.326 "uuid": "2507ff91-92a6-5c01-8bdc-601ef73280c6", 00:11:50.326 "is_configured": true, 00:11:50.326 "data_offset": 2048, 00:11:50.326 "data_size": 63488 00:11:50.326 } 00:11:50.326 ] 00:11:50.326 }' 00:11:50.326 16:08:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:50.326 16:08:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.895 16:08:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:50.895 16:08:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.895 16:08:16 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:50.895 [2024-12-12 16:08:16.947045] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:50.895 [2024-12-12 16:08:16.947101] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:50.895 [2024-12-12 16:08:16.950472] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:50.895 [2024-12-12 16:08:16.950571] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:50.895 [2024-12-12 16:08:16.950639] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:50.895 [2024-12-12 16:08:16.950657] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:50.895 { 00:11:50.895 "results": [ 00:11:50.895 { 00:11:50.895 "job": "raid_bdev1", 00:11:50.895 "core_mask": "0x1", 00:11:50.895 "workload": "randrw", 00:11:50.895 "percentage": 50, 00:11:50.895 "status": "finished", 00:11:50.895 "queue_depth": 1, 00:11:50.895 "io_size": 131072, 00:11:50.895 "runtime": 1.319395, 00:11:50.895 "iops": 10961.842359566317, 00:11:50.895 "mibps": 1370.2302949457896, 00:11:50.895 "io_failed": 1, 00:11:50.895 "io_timeout": 0, 00:11:50.895 "avg_latency_us": 128.04270974224215, 00:11:50.895 "min_latency_us": 29.512663755458515, 00:11:50.895 "max_latency_us": 1638.4 00:11:50.895 } 00:11:50.895 ], 00:11:50.895 "core_count": 1 00:11:50.895 } 00:11:50.895 16:08:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.895 16:08:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75077 00:11:50.895 16:08:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 75077 ']' 00:11:50.895 16:08:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 75077 00:11:50.895 16:08:16 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@959 -- # uname 00:11:50.895 16:08:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:50.895 16:08:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75077 00:11:50.895 16:08:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:50.895 16:08:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:50.895 killing process with pid 75077 00:11:50.895 16:08:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75077' 00:11:50.895 16:08:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 75077 00:11:50.895 [2024-12-12 16:08:16.989718] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:50.895 16:08:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 75077 00:11:51.154 [2024-12-12 16:08:17.408307] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:53.063 16:08:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.xypy2dCOhr 00:11:53.063 16:08:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:53.063 16:08:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:53.063 16:08:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.76 00:11:53.063 16:08:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:53.063 16:08:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:53.063 16:08:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:53.063 16:08:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.76 != \0\.\0\0 ]] 00:11:53.063 00:11:53.063 real 0m4.945s 00:11:53.063 user 0m5.543s 
00:11:53.063 sys 0m0.684s 00:11:53.063 16:08:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:53.063 16:08:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.063 ************************************ 00:11:53.063 END TEST raid_write_error_test 00:11:53.063 ************************************ 00:11:53.063 16:08:18 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:53.063 16:08:18 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:11:53.063 16:08:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:53.063 16:08:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:53.063 16:08:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:53.063 ************************************ 00:11:53.063 START TEST raid_state_function_test 00:11:53.063 ************************************ 00:11:53.063 16:08:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 false 00:11:53.063 16:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:11:53.063 16:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:53.063 16:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:53.063 16:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:53.063 16:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:53.063 16:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:53.063 16:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:53.063 16:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:53.063 
16:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:53.063 16:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:53.063 16:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:53.063 16:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:53.063 16:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:53.063 16:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:53.063 16:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:53.063 16:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:53.063 16:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:53.063 16:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:53.063 16:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:53.063 16:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:53.064 16:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:53.064 16:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:53.064 16:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:53.064 16:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:53.064 16:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:11:53.064 16:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:11:53.064 16:08:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:53.064 16:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:53.064 16:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=75227 00:11:53.064 16:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:53.064 Process raid pid: 75227 00:11:53.064 16:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 75227' 00:11:53.064 16:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 75227 00:11:53.064 16:08:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 75227 ']' 00:11:53.064 16:08:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:53.064 16:08:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:53.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:53.064 16:08:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:53.064 16:08:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:53.064 16:08:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.064 [2024-12-12 16:08:19.106296] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:11:53.064 [2024-12-12 16:08:19.106462] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:53.064 [2024-12-12 16:08:19.290743] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:53.324 [2024-12-12 16:08:19.449638] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:53.584 [2024-12-12 16:08:19.728534] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:53.584 [2024-12-12 16:08:19.728592] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:53.844 16:08:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:53.844 16:08:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:11:53.844 16:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:53.844 16:08:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.844 16:08:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.844 [2024-12-12 16:08:19.989851] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:53.844 [2024-12-12 16:08:19.989951] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:53.844 [2024-12-12 16:08:19.989965] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:53.844 [2024-12-12 16:08:19.989978] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:53.844 [2024-12-12 16:08:19.989986] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:11:53.844 [2024-12-12 16:08:19.989997] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:53.844 [2024-12-12 16:08:19.990005] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:53.844 [2024-12-12 16:08:19.990017] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:53.844 16:08:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.844 16:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:53.844 16:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:53.844 16:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:53.844 16:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:53.844 16:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:53.844 16:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:53.844 16:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:53.844 16:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:53.844 16:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:53.844 16:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:53.844 16:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.844 16:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:53.844 16:08:20 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.844 16:08:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.844 16:08:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.844 16:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:53.844 "name": "Existed_Raid", 00:11:53.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:53.844 "strip_size_kb": 0, 00:11:53.844 "state": "configuring", 00:11:53.844 "raid_level": "raid1", 00:11:53.844 "superblock": false, 00:11:53.844 "num_base_bdevs": 4, 00:11:53.844 "num_base_bdevs_discovered": 0, 00:11:53.844 "num_base_bdevs_operational": 4, 00:11:53.844 "base_bdevs_list": [ 00:11:53.844 { 00:11:53.844 "name": "BaseBdev1", 00:11:53.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:53.844 "is_configured": false, 00:11:53.844 "data_offset": 0, 00:11:53.844 "data_size": 0 00:11:53.845 }, 00:11:53.845 { 00:11:53.845 "name": "BaseBdev2", 00:11:53.845 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:53.845 "is_configured": false, 00:11:53.845 "data_offset": 0, 00:11:53.845 "data_size": 0 00:11:53.845 }, 00:11:53.845 { 00:11:53.845 "name": "BaseBdev3", 00:11:53.845 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:53.845 "is_configured": false, 00:11:53.845 "data_offset": 0, 00:11:53.845 "data_size": 0 00:11:53.845 }, 00:11:53.845 { 00:11:53.845 "name": "BaseBdev4", 00:11:53.845 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:53.845 "is_configured": false, 00:11:53.845 "data_offset": 0, 00:11:53.845 "data_size": 0 00:11:53.845 } 00:11:53.845 ] 00:11:53.845 }' 00:11:53.845 16:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:53.845 16:08:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.105 16:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:11:54.105 16:08:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.105 16:08:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.366 [2024-12-12 16:08:20.461060] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:54.366 [2024-12-12 16:08:20.461126] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:54.366 16:08:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.366 16:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:54.366 16:08:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.366 16:08:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.366 [2024-12-12 16:08:20.473037] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:54.366 [2024-12-12 16:08:20.473107] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:54.366 [2024-12-12 16:08:20.473118] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:54.366 [2024-12-12 16:08:20.473133] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:54.366 [2024-12-12 16:08:20.473141] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:54.366 [2024-12-12 16:08:20.473152] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:54.366 [2024-12-12 16:08:20.473160] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:54.366 [2024-12-12 16:08:20.473171] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:54.366 16:08:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.366 16:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:54.366 16:08:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.366 16:08:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.366 [2024-12-12 16:08:20.535143] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:54.366 BaseBdev1 00:11:54.366 16:08:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.366 16:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:54.366 16:08:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:54.366 16:08:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:54.366 16:08:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:54.366 16:08:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:54.366 16:08:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:54.366 16:08:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:54.366 16:08:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.366 16:08:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.366 16:08:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.366 16:08:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:54.366 16:08:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.366 16:08:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.366 [ 00:11:54.366 { 00:11:54.366 "name": "BaseBdev1", 00:11:54.366 "aliases": [ 00:11:54.366 "a12514d2-239f-475c-a1ef-09d757c62a9d" 00:11:54.366 ], 00:11:54.366 "product_name": "Malloc disk", 00:11:54.366 "block_size": 512, 00:11:54.366 "num_blocks": 65536, 00:11:54.366 "uuid": "a12514d2-239f-475c-a1ef-09d757c62a9d", 00:11:54.366 "assigned_rate_limits": { 00:11:54.366 "rw_ios_per_sec": 0, 00:11:54.366 "rw_mbytes_per_sec": 0, 00:11:54.366 "r_mbytes_per_sec": 0, 00:11:54.366 "w_mbytes_per_sec": 0 00:11:54.366 }, 00:11:54.366 "claimed": true, 00:11:54.366 "claim_type": "exclusive_write", 00:11:54.366 "zoned": false, 00:11:54.366 "supported_io_types": { 00:11:54.366 "read": true, 00:11:54.366 "write": true, 00:11:54.366 "unmap": true, 00:11:54.366 "flush": true, 00:11:54.366 "reset": true, 00:11:54.366 "nvme_admin": false, 00:11:54.366 "nvme_io": false, 00:11:54.366 "nvme_io_md": false, 00:11:54.366 "write_zeroes": true, 00:11:54.366 "zcopy": true, 00:11:54.366 "get_zone_info": false, 00:11:54.366 "zone_management": false, 00:11:54.366 "zone_append": false, 00:11:54.366 "compare": false, 00:11:54.367 "compare_and_write": false, 00:11:54.367 "abort": true, 00:11:54.367 "seek_hole": false, 00:11:54.367 "seek_data": false, 00:11:54.367 "copy": true, 00:11:54.367 "nvme_iov_md": false 00:11:54.367 }, 00:11:54.367 "memory_domains": [ 00:11:54.367 { 00:11:54.367 "dma_device_id": "system", 00:11:54.367 "dma_device_type": 1 00:11:54.367 }, 00:11:54.367 { 00:11:54.367 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:54.367 "dma_device_type": 2 00:11:54.367 } 00:11:54.367 ], 00:11:54.367 "driver_specific": {} 00:11:54.367 } 00:11:54.367 ] 00:11:54.367 16:08:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:11:54.367 16:08:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:54.367 16:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:54.367 16:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:54.367 16:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:54.367 16:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:54.367 16:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:54.367 16:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:54.367 16:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:54.367 16:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:54.367 16:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:54.367 16:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:54.367 16:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.367 16:08:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.367 16:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:54.367 16:08:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.367 16:08:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.367 16:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:54.367 "name": "Existed_Raid", 
00:11:54.367 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:54.367 "strip_size_kb": 0, 00:11:54.367 "state": "configuring", 00:11:54.367 "raid_level": "raid1", 00:11:54.367 "superblock": false, 00:11:54.367 "num_base_bdevs": 4, 00:11:54.367 "num_base_bdevs_discovered": 1, 00:11:54.367 "num_base_bdevs_operational": 4, 00:11:54.367 "base_bdevs_list": [ 00:11:54.367 { 00:11:54.367 "name": "BaseBdev1", 00:11:54.367 "uuid": "a12514d2-239f-475c-a1ef-09d757c62a9d", 00:11:54.367 "is_configured": true, 00:11:54.367 "data_offset": 0, 00:11:54.367 "data_size": 65536 00:11:54.367 }, 00:11:54.367 { 00:11:54.367 "name": "BaseBdev2", 00:11:54.367 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:54.367 "is_configured": false, 00:11:54.367 "data_offset": 0, 00:11:54.367 "data_size": 0 00:11:54.367 }, 00:11:54.367 { 00:11:54.367 "name": "BaseBdev3", 00:11:54.367 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:54.367 "is_configured": false, 00:11:54.367 "data_offset": 0, 00:11:54.367 "data_size": 0 00:11:54.367 }, 00:11:54.367 { 00:11:54.367 "name": "BaseBdev4", 00:11:54.367 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:54.367 "is_configured": false, 00:11:54.367 "data_offset": 0, 00:11:54.367 "data_size": 0 00:11:54.367 } 00:11:54.367 ] 00:11:54.367 }' 00:11:54.367 16:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:54.367 16:08:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.938 16:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:54.938 16:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.938 16:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.938 [2024-12-12 16:08:21.042420] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:54.938 [2024-12-12 16:08:21.042515] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:54.938 16:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.938 16:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:54.938 16:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.938 16:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.938 [2024-12-12 16:08:21.054448] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:54.938 [2024-12-12 16:08:21.056999] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:54.938 [2024-12-12 16:08:21.057052] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:54.938 [2024-12-12 16:08:21.057064] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:54.938 [2024-12-12 16:08:21.057076] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:54.938 [2024-12-12 16:08:21.057084] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:54.938 [2024-12-12 16:08:21.057094] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:54.938 16:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.938 16:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:54.938 16:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:54.938 16:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:54.938 
16:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:54.938 16:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:54.938 16:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:54.938 16:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:54.938 16:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:54.938 16:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:54.938 16:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:54.938 16:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:54.938 16:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:54.938 16:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.938 16:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:54.938 16:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.938 16:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.938 16:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.938 16:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:54.938 "name": "Existed_Raid", 00:11:54.938 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:54.938 "strip_size_kb": 0, 00:11:54.938 "state": "configuring", 00:11:54.938 "raid_level": "raid1", 00:11:54.938 "superblock": false, 00:11:54.938 "num_base_bdevs": 4, 00:11:54.938 "num_base_bdevs_discovered": 1, 
00:11:54.938 "num_base_bdevs_operational": 4, 00:11:54.938 "base_bdevs_list": [ 00:11:54.938 { 00:11:54.938 "name": "BaseBdev1", 00:11:54.938 "uuid": "a12514d2-239f-475c-a1ef-09d757c62a9d", 00:11:54.938 "is_configured": true, 00:11:54.938 "data_offset": 0, 00:11:54.938 "data_size": 65536 00:11:54.939 }, 00:11:54.939 { 00:11:54.939 "name": "BaseBdev2", 00:11:54.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:54.939 "is_configured": false, 00:11:54.939 "data_offset": 0, 00:11:54.939 "data_size": 0 00:11:54.939 }, 00:11:54.939 { 00:11:54.939 "name": "BaseBdev3", 00:11:54.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:54.939 "is_configured": false, 00:11:54.939 "data_offset": 0, 00:11:54.939 "data_size": 0 00:11:54.939 }, 00:11:54.939 { 00:11:54.939 "name": "BaseBdev4", 00:11:54.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:54.939 "is_configured": false, 00:11:54.939 "data_offset": 0, 00:11:54.939 "data_size": 0 00:11:54.939 } 00:11:54.939 ] 00:11:54.939 }' 00:11:54.939 16:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:54.939 16:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.199 16:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:55.199 16:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.199 16:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.459 [2024-12-12 16:08:21.588449] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:55.459 BaseBdev2 00:11:55.459 16:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.459 16:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:55.459 16:08:21 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:55.459 16:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:55.459 16:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:55.459 16:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:55.459 16:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:55.459 16:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:55.459 16:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.459 16:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.459 16:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.459 16:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:55.459 16:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.459 16:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.459 [ 00:11:55.459 { 00:11:55.459 "name": "BaseBdev2", 00:11:55.459 "aliases": [ 00:11:55.459 "19c9f96d-265a-40ae-823d-a69a5f3bf81f" 00:11:55.459 ], 00:11:55.459 "product_name": "Malloc disk", 00:11:55.459 "block_size": 512, 00:11:55.459 "num_blocks": 65536, 00:11:55.459 "uuid": "19c9f96d-265a-40ae-823d-a69a5f3bf81f", 00:11:55.459 "assigned_rate_limits": { 00:11:55.459 "rw_ios_per_sec": 0, 00:11:55.459 "rw_mbytes_per_sec": 0, 00:11:55.459 "r_mbytes_per_sec": 0, 00:11:55.459 "w_mbytes_per_sec": 0 00:11:55.459 }, 00:11:55.459 "claimed": true, 00:11:55.460 "claim_type": "exclusive_write", 00:11:55.460 "zoned": false, 00:11:55.460 "supported_io_types": { 00:11:55.460 "read": true, 
00:11:55.460 "write": true, 00:11:55.460 "unmap": true, 00:11:55.460 "flush": true, 00:11:55.460 "reset": true, 00:11:55.460 "nvme_admin": false, 00:11:55.460 "nvme_io": false, 00:11:55.460 "nvme_io_md": false, 00:11:55.460 "write_zeroes": true, 00:11:55.460 "zcopy": true, 00:11:55.460 "get_zone_info": false, 00:11:55.460 "zone_management": false, 00:11:55.460 "zone_append": false, 00:11:55.460 "compare": false, 00:11:55.460 "compare_and_write": false, 00:11:55.460 "abort": true, 00:11:55.460 "seek_hole": false, 00:11:55.460 "seek_data": false, 00:11:55.460 "copy": true, 00:11:55.460 "nvme_iov_md": false 00:11:55.460 }, 00:11:55.460 "memory_domains": [ 00:11:55.460 { 00:11:55.460 "dma_device_id": "system", 00:11:55.460 "dma_device_type": 1 00:11:55.460 }, 00:11:55.460 { 00:11:55.460 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:55.460 "dma_device_type": 2 00:11:55.460 } 00:11:55.460 ], 00:11:55.460 "driver_specific": {} 00:11:55.460 } 00:11:55.460 ] 00:11:55.460 16:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.460 16:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:55.460 16:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:55.460 16:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:55.460 16:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:55.460 16:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:55.460 16:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:55.460 16:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:55.460 16:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:11:55.460 16:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:55.460 16:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:55.460 16:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:55.460 16:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:55.460 16:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:55.460 16:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.460 16:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.460 16:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:55.460 16:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.460 16:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.460 16:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:55.460 "name": "Existed_Raid", 00:11:55.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.460 "strip_size_kb": 0, 00:11:55.460 "state": "configuring", 00:11:55.460 "raid_level": "raid1", 00:11:55.460 "superblock": false, 00:11:55.460 "num_base_bdevs": 4, 00:11:55.460 "num_base_bdevs_discovered": 2, 00:11:55.460 "num_base_bdevs_operational": 4, 00:11:55.460 "base_bdevs_list": [ 00:11:55.460 { 00:11:55.460 "name": "BaseBdev1", 00:11:55.460 "uuid": "a12514d2-239f-475c-a1ef-09d757c62a9d", 00:11:55.460 "is_configured": true, 00:11:55.460 "data_offset": 0, 00:11:55.460 "data_size": 65536 00:11:55.460 }, 00:11:55.460 { 00:11:55.460 "name": "BaseBdev2", 00:11:55.460 "uuid": "19c9f96d-265a-40ae-823d-a69a5f3bf81f", 00:11:55.460 "is_configured": true, 
00:11:55.460 "data_offset": 0, 00:11:55.460 "data_size": 65536 00:11:55.460 }, 00:11:55.460 { 00:11:55.460 "name": "BaseBdev3", 00:11:55.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.460 "is_configured": false, 00:11:55.460 "data_offset": 0, 00:11:55.460 "data_size": 0 00:11:55.460 }, 00:11:55.460 { 00:11:55.460 "name": "BaseBdev4", 00:11:55.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.460 "is_configured": false, 00:11:55.460 "data_offset": 0, 00:11:55.460 "data_size": 0 00:11:55.460 } 00:11:55.460 ] 00:11:55.460 }' 00:11:55.460 16:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:55.460 16:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.031 16:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:56.031 16:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.031 16:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.031 [2024-12-12 16:08:22.155542] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:56.031 BaseBdev3 00:11:56.031 16:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.031 16:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:56.031 16:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:56.031 16:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:56.031 16:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:56.031 16:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:56.031 16:08:22 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:56.031 16:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:56.031 16:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.031 16:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.031 16:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.031 16:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:56.031 16:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.031 16:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.031 [ 00:11:56.031 { 00:11:56.031 "name": "BaseBdev3", 00:11:56.031 "aliases": [ 00:11:56.031 "09ff6454-b638-4a88-9389-6089c7cfb5cd" 00:11:56.031 ], 00:11:56.031 "product_name": "Malloc disk", 00:11:56.031 "block_size": 512, 00:11:56.031 "num_blocks": 65536, 00:11:56.031 "uuid": "09ff6454-b638-4a88-9389-6089c7cfb5cd", 00:11:56.031 "assigned_rate_limits": { 00:11:56.031 "rw_ios_per_sec": 0, 00:11:56.031 "rw_mbytes_per_sec": 0, 00:11:56.031 "r_mbytes_per_sec": 0, 00:11:56.031 "w_mbytes_per_sec": 0 00:11:56.031 }, 00:11:56.031 "claimed": true, 00:11:56.031 "claim_type": "exclusive_write", 00:11:56.031 "zoned": false, 00:11:56.031 "supported_io_types": { 00:11:56.031 "read": true, 00:11:56.031 "write": true, 00:11:56.031 "unmap": true, 00:11:56.031 "flush": true, 00:11:56.031 "reset": true, 00:11:56.031 "nvme_admin": false, 00:11:56.031 "nvme_io": false, 00:11:56.031 "nvme_io_md": false, 00:11:56.031 "write_zeroes": true, 00:11:56.031 "zcopy": true, 00:11:56.031 "get_zone_info": false, 00:11:56.031 "zone_management": false, 00:11:56.031 "zone_append": false, 00:11:56.031 "compare": false, 00:11:56.031 "compare_and_write": false, 
00:11:56.031 "abort": true, 00:11:56.031 "seek_hole": false, 00:11:56.031 "seek_data": false, 00:11:56.031 "copy": true, 00:11:56.031 "nvme_iov_md": false 00:11:56.031 }, 00:11:56.031 "memory_domains": [ 00:11:56.031 { 00:11:56.031 "dma_device_id": "system", 00:11:56.031 "dma_device_type": 1 00:11:56.031 }, 00:11:56.031 { 00:11:56.031 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:56.031 "dma_device_type": 2 00:11:56.031 } 00:11:56.031 ], 00:11:56.031 "driver_specific": {} 00:11:56.031 } 00:11:56.031 ] 00:11:56.031 16:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.031 16:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:56.031 16:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:56.031 16:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:56.031 16:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:56.031 16:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:56.031 16:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:56.031 16:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:56.031 16:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:56.031 16:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:56.031 16:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:56.031 16:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:56.031 16:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:11:56.031 16:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:56.031 16:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.031 16:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.031 16:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:56.031 16:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.031 16:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.031 16:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:56.031 "name": "Existed_Raid", 00:11:56.031 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:56.031 "strip_size_kb": 0, 00:11:56.031 "state": "configuring", 00:11:56.031 "raid_level": "raid1", 00:11:56.031 "superblock": false, 00:11:56.031 "num_base_bdevs": 4, 00:11:56.031 "num_base_bdevs_discovered": 3, 00:11:56.031 "num_base_bdevs_operational": 4, 00:11:56.031 "base_bdevs_list": [ 00:11:56.031 { 00:11:56.031 "name": "BaseBdev1", 00:11:56.031 "uuid": "a12514d2-239f-475c-a1ef-09d757c62a9d", 00:11:56.031 "is_configured": true, 00:11:56.031 "data_offset": 0, 00:11:56.031 "data_size": 65536 00:11:56.031 }, 00:11:56.031 { 00:11:56.031 "name": "BaseBdev2", 00:11:56.031 "uuid": "19c9f96d-265a-40ae-823d-a69a5f3bf81f", 00:11:56.031 "is_configured": true, 00:11:56.031 "data_offset": 0, 00:11:56.031 "data_size": 65536 00:11:56.031 }, 00:11:56.031 { 00:11:56.031 "name": "BaseBdev3", 00:11:56.031 "uuid": "09ff6454-b638-4a88-9389-6089c7cfb5cd", 00:11:56.031 "is_configured": true, 00:11:56.031 "data_offset": 0, 00:11:56.031 "data_size": 65536 00:11:56.031 }, 00:11:56.031 { 00:11:56.031 "name": "BaseBdev4", 00:11:56.031 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:56.031 "is_configured": false, 
00:11:56.031 "data_offset": 0, 00:11:56.031 "data_size": 0 00:11:56.031 } 00:11:56.031 ] 00:11:56.031 }' 00:11:56.031 16:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:56.031 16:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.601 16:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:56.601 16:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.601 16:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.601 [2024-12-12 16:08:22.721930] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:56.601 [2024-12-12 16:08:22.722009] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:56.601 [2024-12-12 16:08:22.722019] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:56.601 [2024-12-12 16:08:22.722385] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:56.601 [2024-12-12 16:08:22.722629] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:56.601 [2024-12-12 16:08:22.722653] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:56.602 [2024-12-12 16:08:22.723011] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:56.602 BaseBdev4 00:11:56.602 16:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.602 16:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:56.602 16:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:56.602 16:08:22 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:56.602 16:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:56.602 16:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:56.602 16:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:56.602 16:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:56.602 16:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.602 16:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.602 16:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.602 16:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:56.602 16:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.602 16:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.602 [ 00:11:56.602 { 00:11:56.602 "name": "BaseBdev4", 00:11:56.602 "aliases": [ 00:11:56.602 "323be6c9-8b92-42f0-9a21-2ce9b6b7df5d" 00:11:56.602 ], 00:11:56.602 "product_name": "Malloc disk", 00:11:56.602 "block_size": 512, 00:11:56.602 "num_blocks": 65536, 00:11:56.602 "uuid": "323be6c9-8b92-42f0-9a21-2ce9b6b7df5d", 00:11:56.602 "assigned_rate_limits": { 00:11:56.602 "rw_ios_per_sec": 0, 00:11:56.602 "rw_mbytes_per_sec": 0, 00:11:56.602 "r_mbytes_per_sec": 0, 00:11:56.602 "w_mbytes_per_sec": 0 00:11:56.602 }, 00:11:56.602 "claimed": true, 00:11:56.602 "claim_type": "exclusive_write", 00:11:56.602 "zoned": false, 00:11:56.602 "supported_io_types": { 00:11:56.602 "read": true, 00:11:56.602 "write": true, 00:11:56.602 "unmap": true, 00:11:56.602 "flush": true, 00:11:56.602 "reset": true, 00:11:56.602 
"nvme_admin": false, 00:11:56.602 "nvme_io": false, 00:11:56.602 "nvme_io_md": false, 00:11:56.602 "write_zeroes": true, 00:11:56.602 "zcopy": true, 00:11:56.602 "get_zone_info": false, 00:11:56.602 "zone_management": false, 00:11:56.602 "zone_append": false, 00:11:56.602 "compare": false, 00:11:56.602 "compare_and_write": false, 00:11:56.602 "abort": true, 00:11:56.602 "seek_hole": false, 00:11:56.602 "seek_data": false, 00:11:56.602 "copy": true, 00:11:56.602 "nvme_iov_md": false 00:11:56.602 }, 00:11:56.602 "memory_domains": [ 00:11:56.602 { 00:11:56.602 "dma_device_id": "system", 00:11:56.602 "dma_device_type": 1 00:11:56.602 }, 00:11:56.602 { 00:11:56.602 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:56.602 "dma_device_type": 2 00:11:56.602 } 00:11:56.602 ], 00:11:56.602 "driver_specific": {} 00:11:56.602 } 00:11:56.602 ] 00:11:56.602 16:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.602 16:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:56.602 16:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:56.602 16:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:56.602 16:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:11:56.602 16:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:56.602 16:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:56.602 16:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:56.602 16:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:56.602 16:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:56.602 16:08:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:56.602 16:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:56.602 16:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:56.602 16:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:56.602 16:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.602 16:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:56.602 16:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.602 16:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.602 16:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.602 16:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:56.602 "name": "Existed_Raid", 00:11:56.602 "uuid": "19507718-2ef3-4b1b-ac7c-f1243b10cd88", 00:11:56.602 "strip_size_kb": 0, 00:11:56.602 "state": "online", 00:11:56.602 "raid_level": "raid1", 00:11:56.602 "superblock": false, 00:11:56.602 "num_base_bdevs": 4, 00:11:56.602 "num_base_bdevs_discovered": 4, 00:11:56.602 "num_base_bdevs_operational": 4, 00:11:56.602 "base_bdevs_list": [ 00:11:56.602 { 00:11:56.602 "name": "BaseBdev1", 00:11:56.602 "uuid": "a12514d2-239f-475c-a1ef-09d757c62a9d", 00:11:56.602 "is_configured": true, 00:11:56.602 "data_offset": 0, 00:11:56.602 "data_size": 65536 00:11:56.602 }, 00:11:56.602 { 00:11:56.602 "name": "BaseBdev2", 00:11:56.602 "uuid": "19c9f96d-265a-40ae-823d-a69a5f3bf81f", 00:11:56.602 "is_configured": true, 00:11:56.602 "data_offset": 0, 00:11:56.602 "data_size": 65536 00:11:56.602 }, 00:11:56.602 { 00:11:56.602 "name": "BaseBdev3", 00:11:56.602 "uuid": 
"09ff6454-b638-4a88-9389-6089c7cfb5cd", 00:11:56.602 "is_configured": true, 00:11:56.602 "data_offset": 0, 00:11:56.602 "data_size": 65536 00:11:56.602 }, 00:11:56.602 { 00:11:56.602 "name": "BaseBdev4", 00:11:56.602 "uuid": "323be6c9-8b92-42f0-9a21-2ce9b6b7df5d", 00:11:56.602 "is_configured": true, 00:11:56.602 "data_offset": 0, 00:11:56.602 "data_size": 65536 00:11:56.602 } 00:11:56.602 ] 00:11:56.602 }' 00:11:56.602 16:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:56.602 16:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.862 16:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:56.862 16:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:56.862 16:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:56.862 16:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:56.862 16:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:56.862 16:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:56.862 16:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:56.862 16:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:56.862 16:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.862 16:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.862 [2024-12-12 16:08:23.201677] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:57.122 16:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.122 16:08:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:57.122 "name": "Existed_Raid", 00:11:57.122 "aliases": [ 00:11:57.122 "19507718-2ef3-4b1b-ac7c-f1243b10cd88" 00:11:57.122 ], 00:11:57.122 "product_name": "Raid Volume", 00:11:57.122 "block_size": 512, 00:11:57.122 "num_blocks": 65536, 00:11:57.122 "uuid": "19507718-2ef3-4b1b-ac7c-f1243b10cd88", 00:11:57.122 "assigned_rate_limits": { 00:11:57.122 "rw_ios_per_sec": 0, 00:11:57.122 "rw_mbytes_per_sec": 0, 00:11:57.122 "r_mbytes_per_sec": 0, 00:11:57.122 "w_mbytes_per_sec": 0 00:11:57.122 }, 00:11:57.122 "claimed": false, 00:11:57.122 "zoned": false, 00:11:57.122 "supported_io_types": { 00:11:57.122 "read": true, 00:11:57.122 "write": true, 00:11:57.122 "unmap": false, 00:11:57.122 "flush": false, 00:11:57.122 "reset": true, 00:11:57.122 "nvme_admin": false, 00:11:57.122 "nvme_io": false, 00:11:57.122 "nvme_io_md": false, 00:11:57.122 "write_zeroes": true, 00:11:57.122 "zcopy": false, 00:11:57.122 "get_zone_info": false, 00:11:57.122 "zone_management": false, 00:11:57.122 "zone_append": false, 00:11:57.122 "compare": false, 00:11:57.122 "compare_and_write": false, 00:11:57.122 "abort": false, 00:11:57.122 "seek_hole": false, 00:11:57.122 "seek_data": false, 00:11:57.122 "copy": false, 00:11:57.122 "nvme_iov_md": false 00:11:57.122 }, 00:11:57.122 "memory_domains": [ 00:11:57.122 { 00:11:57.122 "dma_device_id": "system", 00:11:57.122 "dma_device_type": 1 00:11:57.122 }, 00:11:57.122 { 00:11:57.122 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:57.122 "dma_device_type": 2 00:11:57.122 }, 00:11:57.122 { 00:11:57.122 "dma_device_id": "system", 00:11:57.122 "dma_device_type": 1 00:11:57.122 }, 00:11:57.122 { 00:11:57.122 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:57.122 "dma_device_type": 2 00:11:57.122 }, 00:11:57.122 { 00:11:57.122 "dma_device_id": "system", 00:11:57.122 "dma_device_type": 1 00:11:57.122 }, 00:11:57.122 { 00:11:57.122 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:11:57.122 "dma_device_type": 2 00:11:57.122 }, 00:11:57.122 { 00:11:57.122 "dma_device_id": "system", 00:11:57.122 "dma_device_type": 1 00:11:57.122 }, 00:11:57.122 { 00:11:57.123 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:57.123 "dma_device_type": 2 00:11:57.123 } 00:11:57.123 ], 00:11:57.123 "driver_specific": { 00:11:57.123 "raid": { 00:11:57.123 "uuid": "19507718-2ef3-4b1b-ac7c-f1243b10cd88", 00:11:57.123 "strip_size_kb": 0, 00:11:57.123 "state": "online", 00:11:57.123 "raid_level": "raid1", 00:11:57.123 "superblock": false, 00:11:57.123 "num_base_bdevs": 4, 00:11:57.123 "num_base_bdevs_discovered": 4, 00:11:57.123 "num_base_bdevs_operational": 4, 00:11:57.123 "base_bdevs_list": [ 00:11:57.123 { 00:11:57.123 "name": "BaseBdev1", 00:11:57.123 "uuid": "a12514d2-239f-475c-a1ef-09d757c62a9d", 00:11:57.123 "is_configured": true, 00:11:57.123 "data_offset": 0, 00:11:57.123 "data_size": 65536 00:11:57.123 }, 00:11:57.123 { 00:11:57.123 "name": "BaseBdev2", 00:11:57.123 "uuid": "19c9f96d-265a-40ae-823d-a69a5f3bf81f", 00:11:57.123 "is_configured": true, 00:11:57.123 "data_offset": 0, 00:11:57.123 "data_size": 65536 00:11:57.123 }, 00:11:57.123 { 00:11:57.123 "name": "BaseBdev3", 00:11:57.123 "uuid": "09ff6454-b638-4a88-9389-6089c7cfb5cd", 00:11:57.123 "is_configured": true, 00:11:57.123 "data_offset": 0, 00:11:57.123 "data_size": 65536 00:11:57.123 }, 00:11:57.123 { 00:11:57.123 "name": "BaseBdev4", 00:11:57.123 "uuid": "323be6c9-8b92-42f0-9a21-2ce9b6b7df5d", 00:11:57.123 "is_configured": true, 00:11:57.123 "data_offset": 0, 00:11:57.123 "data_size": 65536 00:11:57.123 } 00:11:57.123 ] 00:11:57.123 } 00:11:57.123 } 00:11:57.123 }' 00:11:57.123 16:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:57.123 16:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:57.123 BaseBdev2 00:11:57.123 BaseBdev3 
00:11:57.123 BaseBdev4' 00:11:57.123 16:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:57.123 16:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:57.123 16:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:57.123 16:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:57.123 16:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.123 16:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.123 16:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:57.123 16:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.123 16:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:57.123 16:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:57.123 16:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:57.123 16:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:57.123 16:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.123 16:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.123 16:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:57.123 16:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.123 16:08:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:57.123 16:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:57.123 16:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:57.123 16:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:57.123 16:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:57.123 16:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.123 16:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.123 16:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.383 16:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:57.383 16:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:57.383 16:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:57.383 16:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:57.383 16:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:57.383 16:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.383 16:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.383 16:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.383 16:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:57.383 16:08:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:57.383 16:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:57.383 16:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.383 16:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.383 [2024-12-12 16:08:23.524798] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:57.383 16:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.383 16:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:57.383 16:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:11:57.383 16:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:57.383 16:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:57.383 16:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:11:57.383 16:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:57.383 16:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:57.383 16:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:57.383 16:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:57.383 16:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:57.383 16:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:57.383 16:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:57.384 
16:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:57.384 16:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:57.384 16:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:57.384 16:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.384 16:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.384 16:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:57.384 16:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.384 16:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.384 16:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:57.384 "name": "Existed_Raid", 00:11:57.384 "uuid": "19507718-2ef3-4b1b-ac7c-f1243b10cd88", 00:11:57.384 "strip_size_kb": 0, 00:11:57.384 "state": "online", 00:11:57.384 "raid_level": "raid1", 00:11:57.384 "superblock": false, 00:11:57.384 "num_base_bdevs": 4, 00:11:57.384 "num_base_bdevs_discovered": 3, 00:11:57.384 "num_base_bdevs_operational": 3, 00:11:57.384 "base_bdevs_list": [ 00:11:57.384 { 00:11:57.384 "name": null, 00:11:57.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:57.384 "is_configured": false, 00:11:57.384 "data_offset": 0, 00:11:57.384 "data_size": 65536 00:11:57.384 }, 00:11:57.384 { 00:11:57.384 "name": "BaseBdev2", 00:11:57.384 "uuid": "19c9f96d-265a-40ae-823d-a69a5f3bf81f", 00:11:57.384 "is_configured": true, 00:11:57.384 "data_offset": 0, 00:11:57.384 "data_size": 65536 00:11:57.384 }, 00:11:57.384 { 00:11:57.384 "name": "BaseBdev3", 00:11:57.384 "uuid": "09ff6454-b638-4a88-9389-6089c7cfb5cd", 00:11:57.384 "is_configured": true, 00:11:57.384 "data_offset": 0, 
00:11:57.384 "data_size": 65536 00:11:57.384 }, 00:11:57.384 { 00:11:57.384 "name": "BaseBdev4", 00:11:57.384 "uuid": "323be6c9-8b92-42f0-9a21-2ce9b6b7df5d", 00:11:57.384 "is_configured": true, 00:11:57.384 "data_offset": 0, 00:11:57.384 "data_size": 65536 00:11:57.384 } 00:11:57.384 ] 00:11:57.384 }' 00:11:57.384 16:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:57.384 16:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.953 16:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:57.953 16:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:57.953 16:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.953 16:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:57.953 16:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.953 16:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.953 16:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.953 16:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:57.953 16:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:57.953 16:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:57.953 16:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.953 16:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.953 [2024-12-12 16:08:24.166958] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:57.953 16:08:24 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.953 16:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:57.953 16:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:57.954 16:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.954 16:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.954 16:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:57.954 16:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.214 16:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.214 16:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:58.214 16:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:58.214 16:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:58.214 16:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.214 16:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.214 [2024-12-12 16:08:24.352512] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:58.214 16:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.214 16:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:58.214 16:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:58.214 16:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:58.214 16:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:11:58.214 16:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.214 16:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.214 16:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.214 16:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:58.214 16:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:58.214 16:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:58.214 16:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.214 16:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.214 [2024-12-12 16:08:24.519682] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:58.214 [2024-12-12 16:08:24.519827] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:58.473 [2024-12-12 16:08:24.650042] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:58.473 [2024-12-12 16:08:24.650127] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:58.473 [2024-12-12 16:08:24.650145] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:58.473 16:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.473 16:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:58.473 16:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:58.473 16:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:11:58.473 16:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.473 16:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.473 16:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:58.473 16:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.473 16:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:58.473 16:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:58.473 16:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:58.473 16:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:58.473 16:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:58.473 16:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:58.473 16:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.473 16:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.473 BaseBdev2 00:11:58.473 16:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.473 16:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:58.473 16:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:58.473 16:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:58.473 16:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:58.473 16:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 
-- # [[ -z '' ]] 00:11:58.473 16:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:58.473 16:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:58.473 16:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.473 16:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.473 16:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.473 16:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:58.473 16:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.473 16:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.473 [ 00:11:58.473 { 00:11:58.473 "name": "BaseBdev2", 00:11:58.473 "aliases": [ 00:11:58.473 "072c9471-dd99-4ad8-853b-370b4d8da9a8" 00:11:58.473 ], 00:11:58.473 "product_name": "Malloc disk", 00:11:58.473 "block_size": 512, 00:11:58.473 "num_blocks": 65536, 00:11:58.474 "uuid": "072c9471-dd99-4ad8-853b-370b4d8da9a8", 00:11:58.474 "assigned_rate_limits": { 00:11:58.474 "rw_ios_per_sec": 0, 00:11:58.474 "rw_mbytes_per_sec": 0, 00:11:58.474 "r_mbytes_per_sec": 0, 00:11:58.474 "w_mbytes_per_sec": 0 00:11:58.474 }, 00:11:58.474 "claimed": false, 00:11:58.474 "zoned": false, 00:11:58.474 "supported_io_types": { 00:11:58.474 "read": true, 00:11:58.474 "write": true, 00:11:58.474 "unmap": true, 00:11:58.474 "flush": true, 00:11:58.474 "reset": true, 00:11:58.474 "nvme_admin": false, 00:11:58.474 "nvme_io": false, 00:11:58.474 "nvme_io_md": false, 00:11:58.474 "write_zeroes": true, 00:11:58.474 "zcopy": true, 00:11:58.474 "get_zone_info": false, 00:11:58.474 "zone_management": false, 00:11:58.474 "zone_append": false, 00:11:58.474 "compare": false, 
00:11:58.474 "compare_and_write": false, 00:11:58.474 "abort": true, 00:11:58.474 "seek_hole": false, 00:11:58.474 "seek_data": false, 00:11:58.474 "copy": true, 00:11:58.474 "nvme_iov_md": false 00:11:58.474 }, 00:11:58.474 "memory_domains": [ 00:11:58.474 { 00:11:58.474 "dma_device_id": "system", 00:11:58.474 "dma_device_type": 1 00:11:58.474 }, 00:11:58.474 { 00:11:58.474 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:58.474 "dma_device_type": 2 00:11:58.474 } 00:11:58.474 ], 00:11:58.474 "driver_specific": {} 00:11:58.474 } 00:11:58.474 ] 00:11:58.474 16:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.474 16:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:58.474 16:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:58.474 16:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:58.474 16:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:58.474 16:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.474 16:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.734 BaseBdev3 00:11:58.734 16:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.734 16:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:58.734 16:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:58.734 16:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:58.734 16:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:58.734 16:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' 
]] 00:11:58.734 16:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:58.734 16:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:58.734 16:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.734 16:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.734 16:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.734 16:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:58.734 16:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.734 16:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.734 [ 00:11:58.734 { 00:11:58.734 "name": "BaseBdev3", 00:11:58.734 "aliases": [ 00:11:58.734 "616231c7-b601-4077-8ab9-a6500cd2cbd3" 00:11:58.734 ], 00:11:58.734 "product_name": "Malloc disk", 00:11:58.734 "block_size": 512, 00:11:58.734 "num_blocks": 65536, 00:11:58.734 "uuid": "616231c7-b601-4077-8ab9-a6500cd2cbd3", 00:11:58.734 "assigned_rate_limits": { 00:11:58.734 "rw_ios_per_sec": 0, 00:11:58.734 "rw_mbytes_per_sec": 0, 00:11:58.734 "r_mbytes_per_sec": 0, 00:11:58.734 "w_mbytes_per_sec": 0 00:11:58.734 }, 00:11:58.734 "claimed": false, 00:11:58.734 "zoned": false, 00:11:58.734 "supported_io_types": { 00:11:58.734 "read": true, 00:11:58.734 "write": true, 00:11:58.734 "unmap": true, 00:11:58.734 "flush": true, 00:11:58.734 "reset": true, 00:11:58.734 "nvme_admin": false, 00:11:58.734 "nvme_io": false, 00:11:58.734 "nvme_io_md": false, 00:11:58.734 "write_zeroes": true, 00:11:58.734 "zcopy": true, 00:11:58.734 "get_zone_info": false, 00:11:58.734 "zone_management": false, 00:11:58.734 "zone_append": false, 00:11:58.734 "compare": false, 00:11:58.734 
"compare_and_write": false, 00:11:58.734 "abort": true, 00:11:58.734 "seek_hole": false, 00:11:58.734 "seek_data": false, 00:11:58.734 "copy": true, 00:11:58.734 "nvme_iov_md": false 00:11:58.734 }, 00:11:58.734 "memory_domains": [ 00:11:58.734 { 00:11:58.734 "dma_device_id": "system", 00:11:58.734 "dma_device_type": 1 00:11:58.734 }, 00:11:58.734 { 00:11:58.734 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:58.734 "dma_device_type": 2 00:11:58.734 } 00:11:58.734 ], 00:11:58.734 "driver_specific": {} 00:11:58.734 } 00:11:58.734 ] 00:11:58.734 16:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.734 16:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:58.734 16:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:58.734 16:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:58.734 16:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:58.734 16:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.734 16:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.734 BaseBdev4 00:11:58.734 16:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.734 16:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:58.734 16:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:58.734 16:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:58.734 16:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:58.734 16:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 
00:11:58.734 16:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:58.734 16:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:58.734 16:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.734 16:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.734 16:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.734 16:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:58.734 16:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.734 16:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.734 [ 00:11:58.734 { 00:11:58.734 "name": "BaseBdev4", 00:11:58.734 "aliases": [ 00:11:58.734 "875df570-5c68-4b10-ae73-8b06ae79e8d6" 00:11:58.734 ], 00:11:58.734 "product_name": "Malloc disk", 00:11:58.734 "block_size": 512, 00:11:58.734 "num_blocks": 65536, 00:11:58.734 "uuid": "875df570-5c68-4b10-ae73-8b06ae79e8d6", 00:11:58.734 "assigned_rate_limits": { 00:11:58.734 "rw_ios_per_sec": 0, 00:11:58.734 "rw_mbytes_per_sec": 0, 00:11:58.734 "r_mbytes_per_sec": 0, 00:11:58.734 "w_mbytes_per_sec": 0 00:11:58.734 }, 00:11:58.734 "claimed": false, 00:11:58.734 "zoned": false, 00:11:58.734 "supported_io_types": { 00:11:58.734 "read": true, 00:11:58.734 "write": true, 00:11:58.734 "unmap": true, 00:11:58.734 "flush": true, 00:11:58.734 "reset": true, 00:11:58.734 "nvme_admin": false, 00:11:58.734 "nvme_io": false, 00:11:58.734 "nvme_io_md": false, 00:11:58.735 "write_zeroes": true, 00:11:58.735 "zcopy": true, 00:11:58.735 "get_zone_info": false, 00:11:58.735 "zone_management": false, 00:11:58.735 "zone_append": false, 00:11:58.735 "compare": false, 00:11:58.735 
"compare_and_write": false, 00:11:58.735 "abort": true, 00:11:58.735 "seek_hole": false, 00:11:58.735 "seek_data": false, 00:11:58.735 "copy": true, 00:11:58.735 "nvme_iov_md": false 00:11:58.735 }, 00:11:58.735 "memory_domains": [ 00:11:58.735 { 00:11:58.735 "dma_device_id": "system", 00:11:58.735 "dma_device_type": 1 00:11:58.735 }, 00:11:58.735 { 00:11:58.735 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:58.735 "dma_device_type": 2 00:11:58.735 } 00:11:58.735 ], 00:11:58.735 "driver_specific": {} 00:11:58.735 } 00:11:58.735 ] 00:11:58.735 16:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.735 16:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:58.735 16:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:58.735 16:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:58.735 16:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:58.735 16:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.735 16:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.735 [2024-12-12 16:08:24.986758] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:58.735 [2024-12-12 16:08:24.986841] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:58.735 [2024-12-12 16:08:24.986875] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:58.735 [2024-12-12 16:08:24.989487] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:58.735 [2024-12-12 16:08:24.989544] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 
00:11:58.735 16:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.735 16:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:58.735 16:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:58.735 16:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:58.735 16:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:58.735 16:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:58.735 16:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:58.735 16:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:58.735 16:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:58.735 16:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:58.735 16:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:58.735 16:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.735 16:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.735 16:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.735 16:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:58.735 16:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.735 16:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:58.735 "name": "Existed_Raid", 00:11:58.735 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:11:58.735 "strip_size_kb": 0, 00:11:58.735 "state": "configuring", 00:11:58.735 "raid_level": "raid1", 00:11:58.735 "superblock": false, 00:11:58.735 "num_base_bdevs": 4, 00:11:58.735 "num_base_bdevs_discovered": 3, 00:11:58.735 "num_base_bdevs_operational": 4, 00:11:58.735 "base_bdevs_list": [ 00:11:58.735 { 00:11:58.735 "name": "BaseBdev1", 00:11:58.735 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:58.735 "is_configured": false, 00:11:58.735 "data_offset": 0, 00:11:58.735 "data_size": 0 00:11:58.735 }, 00:11:58.735 { 00:11:58.735 "name": "BaseBdev2", 00:11:58.735 "uuid": "072c9471-dd99-4ad8-853b-370b4d8da9a8", 00:11:58.735 "is_configured": true, 00:11:58.735 "data_offset": 0, 00:11:58.735 "data_size": 65536 00:11:58.735 }, 00:11:58.735 { 00:11:58.735 "name": "BaseBdev3", 00:11:58.735 "uuid": "616231c7-b601-4077-8ab9-a6500cd2cbd3", 00:11:58.735 "is_configured": true, 00:11:58.735 "data_offset": 0, 00:11:58.735 "data_size": 65536 00:11:58.735 }, 00:11:58.735 { 00:11:58.735 "name": "BaseBdev4", 00:11:58.735 "uuid": "875df570-5c68-4b10-ae73-8b06ae79e8d6", 00:11:58.735 "is_configured": true, 00:11:58.735 "data_offset": 0, 00:11:58.735 "data_size": 65536 00:11:58.735 } 00:11:58.735 ] 00:11:58.735 }' 00:11:58.735 16:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:58.735 16:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.303 16:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:59.303 16:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.303 16:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.303 [2024-12-12 16:08:25.458079] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:59.303 16:08:25 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.303 16:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:59.303 16:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:59.303 16:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:59.303 16:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:59.303 16:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:59.303 16:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:59.303 16:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:59.303 16:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:59.303 16:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:59.303 16:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:59.303 16:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:59.303 16:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.303 16:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.303 16:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.303 16:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.303 16:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:59.303 "name": "Existed_Raid", 00:11:59.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:59.303 
"strip_size_kb": 0, 00:11:59.303 "state": "configuring", 00:11:59.303 "raid_level": "raid1", 00:11:59.303 "superblock": false, 00:11:59.303 "num_base_bdevs": 4, 00:11:59.303 "num_base_bdevs_discovered": 2, 00:11:59.303 "num_base_bdevs_operational": 4, 00:11:59.303 "base_bdevs_list": [ 00:11:59.303 { 00:11:59.303 "name": "BaseBdev1", 00:11:59.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:59.303 "is_configured": false, 00:11:59.303 "data_offset": 0, 00:11:59.303 "data_size": 0 00:11:59.303 }, 00:11:59.303 { 00:11:59.303 "name": null, 00:11:59.303 "uuid": "072c9471-dd99-4ad8-853b-370b4d8da9a8", 00:11:59.303 "is_configured": false, 00:11:59.303 "data_offset": 0, 00:11:59.303 "data_size": 65536 00:11:59.303 }, 00:11:59.303 { 00:11:59.303 "name": "BaseBdev3", 00:11:59.303 "uuid": "616231c7-b601-4077-8ab9-a6500cd2cbd3", 00:11:59.303 "is_configured": true, 00:11:59.303 "data_offset": 0, 00:11:59.303 "data_size": 65536 00:11:59.303 }, 00:11:59.303 { 00:11:59.303 "name": "BaseBdev4", 00:11:59.303 "uuid": "875df570-5c68-4b10-ae73-8b06ae79e8d6", 00:11:59.303 "is_configured": true, 00:11:59.303 "data_offset": 0, 00:11:59.303 "data_size": 65536 00:11:59.303 } 00:11:59.303 ] 00:11:59.303 }' 00:11:59.303 16:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:59.304 16:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.873 16:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.874 16:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:59.874 16:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.874 16:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.874 16:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.874 16:08:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:59.874 16:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:59.874 16:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.874 16:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.874 [2024-12-12 16:08:26.071558] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:59.874 BaseBdev1 00:11:59.874 16:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.874 16:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:59.874 16:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:59.874 16:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:59.874 16:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:59.874 16:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:59.874 16:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:59.874 16:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:59.874 16:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.874 16:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.874 16:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.874 16:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:59.874 16:08:26 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.874 16:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.874 [ 00:11:59.874 { 00:11:59.874 "name": "BaseBdev1", 00:11:59.874 "aliases": [ 00:11:59.874 "88ca7188-74ec-4f42-9ec4-3bc408abde76" 00:11:59.874 ], 00:11:59.874 "product_name": "Malloc disk", 00:11:59.874 "block_size": 512, 00:11:59.874 "num_blocks": 65536, 00:11:59.874 "uuid": "88ca7188-74ec-4f42-9ec4-3bc408abde76", 00:11:59.874 "assigned_rate_limits": { 00:11:59.874 "rw_ios_per_sec": 0, 00:11:59.874 "rw_mbytes_per_sec": 0, 00:11:59.874 "r_mbytes_per_sec": 0, 00:11:59.874 "w_mbytes_per_sec": 0 00:11:59.874 }, 00:11:59.874 "claimed": true, 00:11:59.874 "claim_type": "exclusive_write", 00:11:59.874 "zoned": false, 00:11:59.874 "supported_io_types": { 00:11:59.874 "read": true, 00:11:59.874 "write": true, 00:11:59.874 "unmap": true, 00:11:59.874 "flush": true, 00:11:59.874 "reset": true, 00:11:59.874 "nvme_admin": false, 00:11:59.874 "nvme_io": false, 00:11:59.874 "nvme_io_md": false, 00:11:59.874 "write_zeroes": true, 00:11:59.874 "zcopy": true, 00:11:59.874 "get_zone_info": false, 00:11:59.874 "zone_management": false, 00:11:59.874 "zone_append": false, 00:11:59.874 "compare": false, 00:11:59.874 "compare_and_write": false, 00:11:59.874 "abort": true, 00:11:59.874 "seek_hole": false, 00:11:59.874 "seek_data": false, 00:11:59.874 "copy": true, 00:11:59.874 "nvme_iov_md": false 00:11:59.874 }, 00:11:59.874 "memory_domains": [ 00:11:59.874 { 00:11:59.874 "dma_device_id": "system", 00:11:59.874 "dma_device_type": 1 00:11:59.874 }, 00:11:59.874 { 00:11:59.874 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:59.874 "dma_device_type": 2 00:11:59.874 } 00:11:59.874 ], 00:11:59.874 "driver_specific": {} 00:11:59.874 } 00:11:59.874 ] 00:11:59.874 16:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.874 16:08:26 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@911 -- # return 0 00:11:59.874 16:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:59.874 16:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:59.874 16:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:59.874 16:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:59.874 16:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:59.874 16:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:59.874 16:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:59.874 16:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:59.874 16:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:59.874 16:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:59.874 16:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.874 16:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:59.874 16:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.874 16:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.874 16:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.874 16:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:59.874 "name": "Existed_Raid", 00:11:59.874 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:59.874 
"strip_size_kb": 0, 00:11:59.874 "state": "configuring", 00:11:59.874 "raid_level": "raid1", 00:11:59.874 "superblock": false, 00:11:59.874 "num_base_bdevs": 4, 00:11:59.874 "num_base_bdevs_discovered": 3, 00:11:59.874 "num_base_bdevs_operational": 4, 00:11:59.874 "base_bdevs_list": [ 00:11:59.874 { 00:11:59.874 "name": "BaseBdev1", 00:11:59.874 "uuid": "88ca7188-74ec-4f42-9ec4-3bc408abde76", 00:11:59.874 "is_configured": true, 00:11:59.874 "data_offset": 0, 00:11:59.874 "data_size": 65536 00:11:59.874 }, 00:11:59.874 { 00:11:59.874 "name": null, 00:11:59.874 "uuid": "072c9471-dd99-4ad8-853b-370b4d8da9a8", 00:11:59.874 "is_configured": false, 00:11:59.874 "data_offset": 0, 00:11:59.874 "data_size": 65536 00:11:59.874 }, 00:11:59.874 { 00:11:59.874 "name": "BaseBdev3", 00:11:59.874 "uuid": "616231c7-b601-4077-8ab9-a6500cd2cbd3", 00:11:59.874 "is_configured": true, 00:11:59.874 "data_offset": 0, 00:11:59.874 "data_size": 65536 00:11:59.874 }, 00:11:59.874 { 00:11:59.874 "name": "BaseBdev4", 00:11:59.874 "uuid": "875df570-5c68-4b10-ae73-8b06ae79e8d6", 00:11:59.874 "is_configured": true, 00:11:59.874 "data_offset": 0, 00:11:59.874 "data_size": 65536 00:11:59.874 } 00:11:59.874 ] 00:11:59.874 }' 00:11:59.874 16:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:59.874 16:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.446 16:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:00.446 16:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.446 16:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.446 16:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.446 16:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.446 
16:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:00.446 16:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:00.446 16:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.446 16:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.446 [2024-12-12 16:08:26.622811] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:00.446 16:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.446 16:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:00.446 16:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:00.446 16:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:00.446 16:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:00.446 16:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:00.446 16:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:00.446 16:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:00.446 16:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:00.446 16:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:00.446 16:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:00.446 16:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.446 16:08:26 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.446 16:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.446 16:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:00.446 16:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.446 16:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:00.446 "name": "Existed_Raid", 00:12:00.446 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:00.446 "strip_size_kb": 0, 00:12:00.446 "state": "configuring", 00:12:00.446 "raid_level": "raid1", 00:12:00.446 "superblock": false, 00:12:00.446 "num_base_bdevs": 4, 00:12:00.446 "num_base_bdevs_discovered": 2, 00:12:00.446 "num_base_bdevs_operational": 4, 00:12:00.446 "base_bdevs_list": [ 00:12:00.446 { 00:12:00.446 "name": "BaseBdev1", 00:12:00.446 "uuid": "88ca7188-74ec-4f42-9ec4-3bc408abde76", 00:12:00.446 "is_configured": true, 00:12:00.446 "data_offset": 0, 00:12:00.446 "data_size": 65536 00:12:00.446 }, 00:12:00.446 { 00:12:00.446 "name": null, 00:12:00.446 "uuid": "072c9471-dd99-4ad8-853b-370b4d8da9a8", 00:12:00.446 "is_configured": false, 00:12:00.446 "data_offset": 0, 00:12:00.446 "data_size": 65536 00:12:00.446 }, 00:12:00.446 { 00:12:00.446 "name": null, 00:12:00.446 "uuid": "616231c7-b601-4077-8ab9-a6500cd2cbd3", 00:12:00.446 "is_configured": false, 00:12:00.446 "data_offset": 0, 00:12:00.446 "data_size": 65536 00:12:00.446 }, 00:12:00.446 { 00:12:00.446 "name": "BaseBdev4", 00:12:00.446 "uuid": "875df570-5c68-4b10-ae73-8b06ae79e8d6", 00:12:00.446 "is_configured": true, 00:12:00.446 "data_offset": 0, 00:12:00.446 "data_size": 65536 00:12:00.446 } 00:12:00.446 ] 00:12:00.446 }' 00:12:00.446 16:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:00.446 16:08:26 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:01.088 16:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:01.088 16:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.088 16:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.088 16:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.088 16:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.088 16:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:01.088 16:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:01.088 16:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.088 16:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.088 [2024-12-12 16:08:27.121998] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:01.088 16:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.088 16:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:01.088 16:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:01.088 16:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:01.088 16:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:01.088 16:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:01.088 16:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:12:01.088 16:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:01.088 16:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:01.088 16:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:01.088 16:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:01.088 16:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.088 16:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:01.088 16:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.088 16:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.088 16:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.088 16:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:01.088 "name": "Existed_Raid", 00:12:01.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.088 "strip_size_kb": 0, 00:12:01.088 "state": "configuring", 00:12:01.088 "raid_level": "raid1", 00:12:01.088 "superblock": false, 00:12:01.088 "num_base_bdevs": 4, 00:12:01.088 "num_base_bdevs_discovered": 3, 00:12:01.088 "num_base_bdevs_operational": 4, 00:12:01.088 "base_bdevs_list": [ 00:12:01.088 { 00:12:01.088 "name": "BaseBdev1", 00:12:01.088 "uuid": "88ca7188-74ec-4f42-9ec4-3bc408abde76", 00:12:01.088 "is_configured": true, 00:12:01.088 "data_offset": 0, 00:12:01.088 "data_size": 65536 00:12:01.088 }, 00:12:01.088 { 00:12:01.088 "name": null, 00:12:01.088 "uuid": "072c9471-dd99-4ad8-853b-370b4d8da9a8", 00:12:01.088 "is_configured": false, 00:12:01.088 "data_offset": 0, 00:12:01.088 "data_size": 65536 00:12:01.088 }, 00:12:01.088 { 
00:12:01.088 "name": "BaseBdev3", 00:12:01.088 "uuid": "616231c7-b601-4077-8ab9-a6500cd2cbd3", 00:12:01.088 "is_configured": true, 00:12:01.088 "data_offset": 0, 00:12:01.088 "data_size": 65536 00:12:01.088 }, 00:12:01.088 { 00:12:01.088 "name": "BaseBdev4", 00:12:01.088 "uuid": "875df570-5c68-4b10-ae73-8b06ae79e8d6", 00:12:01.088 "is_configured": true, 00:12:01.088 "data_offset": 0, 00:12:01.088 "data_size": 65536 00:12:01.088 } 00:12:01.088 ] 00:12:01.088 }' 00:12:01.088 16:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:01.088 16:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.349 16:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.349 16:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:01.349 16:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.349 16:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.349 16:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.349 16:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:01.349 16:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:01.349 16:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.349 16:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.349 [2024-12-12 16:08:27.673159] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:01.609 16:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.609 16:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # 
verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:01.609 16:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:01.609 16:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:01.609 16:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:01.609 16:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:01.609 16:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:01.609 16:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:01.609 16:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:01.609 16:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:01.609 16:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:01.609 16:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.609 16:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:01.609 16:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.609 16:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.609 16:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.609 16:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:01.609 "name": "Existed_Raid", 00:12:01.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.609 "strip_size_kb": 0, 00:12:01.609 "state": "configuring", 00:12:01.609 "raid_level": "raid1", 00:12:01.609 "superblock": false, 00:12:01.609 
"num_base_bdevs": 4, 00:12:01.609 "num_base_bdevs_discovered": 2, 00:12:01.609 "num_base_bdevs_operational": 4, 00:12:01.609 "base_bdevs_list": [ 00:12:01.609 { 00:12:01.609 "name": null, 00:12:01.609 "uuid": "88ca7188-74ec-4f42-9ec4-3bc408abde76", 00:12:01.609 "is_configured": false, 00:12:01.609 "data_offset": 0, 00:12:01.609 "data_size": 65536 00:12:01.609 }, 00:12:01.609 { 00:12:01.609 "name": null, 00:12:01.609 "uuid": "072c9471-dd99-4ad8-853b-370b4d8da9a8", 00:12:01.609 "is_configured": false, 00:12:01.609 "data_offset": 0, 00:12:01.609 "data_size": 65536 00:12:01.609 }, 00:12:01.609 { 00:12:01.609 "name": "BaseBdev3", 00:12:01.609 "uuid": "616231c7-b601-4077-8ab9-a6500cd2cbd3", 00:12:01.609 "is_configured": true, 00:12:01.609 "data_offset": 0, 00:12:01.609 "data_size": 65536 00:12:01.609 }, 00:12:01.609 { 00:12:01.609 "name": "BaseBdev4", 00:12:01.609 "uuid": "875df570-5c68-4b10-ae73-8b06ae79e8d6", 00:12:01.609 "is_configured": true, 00:12:01.609 "data_offset": 0, 00:12:01.609 "data_size": 65536 00:12:01.609 } 00:12:01.609 ] 00:12:01.609 }' 00:12:01.609 16:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:01.609 16:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.180 16:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:02.180 16:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.180 16:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.180 16:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.180 16:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.180 16:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:02.180 16:08:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:02.180 16:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.180 16:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.180 [2024-12-12 16:08:28.304213] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:02.180 16:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.180 16:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:02.180 16:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:02.180 16:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:02.180 16:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:02.180 16:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:02.180 16:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:02.180 16:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:02.180 16:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:02.180 16:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:02.180 16:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:02.180 16:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.180 16:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:02.180 16:08:28 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.180 16:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.180 16:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.180 16:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:02.180 "name": "Existed_Raid", 00:12:02.180 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.180 "strip_size_kb": 0, 00:12:02.180 "state": "configuring", 00:12:02.180 "raid_level": "raid1", 00:12:02.180 "superblock": false, 00:12:02.180 "num_base_bdevs": 4, 00:12:02.180 "num_base_bdevs_discovered": 3, 00:12:02.180 "num_base_bdevs_operational": 4, 00:12:02.180 "base_bdevs_list": [ 00:12:02.180 { 00:12:02.180 "name": null, 00:12:02.180 "uuid": "88ca7188-74ec-4f42-9ec4-3bc408abde76", 00:12:02.180 "is_configured": false, 00:12:02.180 "data_offset": 0, 00:12:02.180 "data_size": 65536 00:12:02.180 }, 00:12:02.180 { 00:12:02.180 "name": "BaseBdev2", 00:12:02.180 "uuid": "072c9471-dd99-4ad8-853b-370b4d8da9a8", 00:12:02.180 "is_configured": true, 00:12:02.180 "data_offset": 0, 00:12:02.180 "data_size": 65536 00:12:02.180 }, 00:12:02.180 { 00:12:02.180 "name": "BaseBdev3", 00:12:02.180 "uuid": "616231c7-b601-4077-8ab9-a6500cd2cbd3", 00:12:02.180 "is_configured": true, 00:12:02.180 "data_offset": 0, 00:12:02.180 "data_size": 65536 00:12:02.180 }, 00:12:02.180 { 00:12:02.180 "name": "BaseBdev4", 00:12:02.180 "uuid": "875df570-5c68-4b10-ae73-8b06ae79e8d6", 00:12:02.180 "is_configured": true, 00:12:02.180 "data_offset": 0, 00:12:02.180 "data_size": 65536 00:12:02.180 } 00:12:02.180 ] 00:12:02.180 }' 00:12:02.180 16:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:02.180 16:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.439 16:08:28 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:02.699 16:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.699 16:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.699 16:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.699 16:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.699 16:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:02.699 16:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:02.699 16:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.699 16:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.699 16:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.699 16:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.699 16:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 88ca7188-74ec-4f42-9ec4-3bc408abde76 00:12:02.699 16:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.699 16:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.699 [2024-12-12 16:08:28.902992] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:02.699 [2024-12-12 16:08:28.903087] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:02.699 [2024-12-12 16:08:28.903102] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:02.699 [2024-12-12 16:08:28.903443] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:12:02.699 [2024-12-12 16:08:28.903663] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:02.699 [2024-12-12 16:08:28.903677] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:02.699 [2024-12-12 16:08:28.904048] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:02.699 NewBaseBdev 00:12:02.699 16:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.699 16:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:02.699 16:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:12:02.699 16:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:02.699 16:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:02.699 16:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:02.699 16:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:02.699 16:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:02.699 16:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.699 16:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.699 16:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.699 16:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:02.699 16:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.700 16:08:28 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.700 [ 00:12:02.700 { 00:12:02.700 "name": "NewBaseBdev", 00:12:02.700 "aliases": [ 00:12:02.700 "88ca7188-74ec-4f42-9ec4-3bc408abde76" 00:12:02.700 ], 00:12:02.700 "product_name": "Malloc disk", 00:12:02.700 "block_size": 512, 00:12:02.700 "num_blocks": 65536, 00:12:02.700 "uuid": "88ca7188-74ec-4f42-9ec4-3bc408abde76", 00:12:02.700 "assigned_rate_limits": { 00:12:02.700 "rw_ios_per_sec": 0, 00:12:02.700 "rw_mbytes_per_sec": 0, 00:12:02.700 "r_mbytes_per_sec": 0, 00:12:02.700 "w_mbytes_per_sec": 0 00:12:02.700 }, 00:12:02.700 "claimed": true, 00:12:02.700 "claim_type": "exclusive_write", 00:12:02.700 "zoned": false, 00:12:02.700 "supported_io_types": { 00:12:02.700 "read": true, 00:12:02.700 "write": true, 00:12:02.700 "unmap": true, 00:12:02.700 "flush": true, 00:12:02.700 "reset": true, 00:12:02.700 "nvme_admin": false, 00:12:02.700 "nvme_io": false, 00:12:02.700 "nvme_io_md": false, 00:12:02.700 "write_zeroes": true, 00:12:02.700 "zcopy": true, 00:12:02.700 "get_zone_info": false, 00:12:02.700 "zone_management": false, 00:12:02.700 "zone_append": false, 00:12:02.700 "compare": false, 00:12:02.700 "compare_and_write": false, 00:12:02.700 "abort": true, 00:12:02.700 "seek_hole": false, 00:12:02.700 "seek_data": false, 00:12:02.700 "copy": true, 00:12:02.700 "nvme_iov_md": false 00:12:02.700 }, 00:12:02.700 "memory_domains": [ 00:12:02.700 { 00:12:02.700 "dma_device_id": "system", 00:12:02.700 "dma_device_type": 1 00:12:02.700 }, 00:12:02.700 { 00:12:02.700 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:02.700 "dma_device_type": 2 00:12:02.700 } 00:12:02.700 ], 00:12:02.700 "driver_specific": {} 00:12:02.700 } 00:12:02.700 ] 00:12:02.700 16:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.700 16:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:02.700 16:08:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:12:02.700 16:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:02.700 16:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:02.700 16:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:02.700 16:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:02.700 16:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:02.700 16:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:02.700 16:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:02.700 16:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:02.700 16:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:02.700 16:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.700 16:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:02.700 16:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.700 16:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.700 16:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.700 16:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:02.700 "name": "Existed_Raid", 00:12:02.700 "uuid": "33ccc80c-9783-452a-bfe0-f3a6852188c6", 00:12:02.700 "strip_size_kb": 0, 00:12:02.700 "state": "online", 00:12:02.700 "raid_level": "raid1", 
00:12:02.700 "superblock": false, 00:12:02.700 "num_base_bdevs": 4, 00:12:02.700 "num_base_bdevs_discovered": 4, 00:12:02.700 "num_base_bdevs_operational": 4, 00:12:02.700 "base_bdevs_list": [ 00:12:02.700 { 00:12:02.700 "name": "NewBaseBdev", 00:12:02.700 "uuid": "88ca7188-74ec-4f42-9ec4-3bc408abde76", 00:12:02.700 "is_configured": true, 00:12:02.700 "data_offset": 0, 00:12:02.700 "data_size": 65536 00:12:02.700 }, 00:12:02.700 { 00:12:02.700 "name": "BaseBdev2", 00:12:02.700 "uuid": "072c9471-dd99-4ad8-853b-370b4d8da9a8", 00:12:02.700 "is_configured": true, 00:12:02.700 "data_offset": 0, 00:12:02.700 "data_size": 65536 00:12:02.700 }, 00:12:02.700 { 00:12:02.700 "name": "BaseBdev3", 00:12:02.700 "uuid": "616231c7-b601-4077-8ab9-a6500cd2cbd3", 00:12:02.700 "is_configured": true, 00:12:02.700 "data_offset": 0, 00:12:02.700 "data_size": 65536 00:12:02.700 }, 00:12:02.700 { 00:12:02.700 "name": "BaseBdev4", 00:12:02.700 "uuid": "875df570-5c68-4b10-ae73-8b06ae79e8d6", 00:12:02.700 "is_configured": true, 00:12:02.700 "data_offset": 0, 00:12:02.700 "data_size": 65536 00:12:02.700 } 00:12:02.700 ] 00:12:02.700 }' 00:12:02.700 16:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:02.700 16:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.269 16:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:03.269 16:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:03.269 16:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:03.269 16:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:03.269 16:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:03.269 16:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev 
cmp_base_bdev 00:12:03.269 16:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:03.269 16:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:03.269 16:08:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.269 16:08:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.269 [2024-12-12 16:08:29.423214] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:03.269 16:08:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.269 16:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:03.269 "name": "Existed_Raid", 00:12:03.269 "aliases": [ 00:12:03.269 "33ccc80c-9783-452a-bfe0-f3a6852188c6" 00:12:03.269 ], 00:12:03.269 "product_name": "Raid Volume", 00:12:03.269 "block_size": 512, 00:12:03.269 "num_blocks": 65536, 00:12:03.269 "uuid": "33ccc80c-9783-452a-bfe0-f3a6852188c6", 00:12:03.269 "assigned_rate_limits": { 00:12:03.269 "rw_ios_per_sec": 0, 00:12:03.269 "rw_mbytes_per_sec": 0, 00:12:03.269 "r_mbytes_per_sec": 0, 00:12:03.269 "w_mbytes_per_sec": 0 00:12:03.269 }, 00:12:03.269 "claimed": false, 00:12:03.269 "zoned": false, 00:12:03.270 "supported_io_types": { 00:12:03.270 "read": true, 00:12:03.270 "write": true, 00:12:03.270 "unmap": false, 00:12:03.270 "flush": false, 00:12:03.270 "reset": true, 00:12:03.270 "nvme_admin": false, 00:12:03.270 "nvme_io": false, 00:12:03.270 "nvme_io_md": false, 00:12:03.270 "write_zeroes": true, 00:12:03.270 "zcopy": false, 00:12:03.270 "get_zone_info": false, 00:12:03.270 "zone_management": false, 00:12:03.270 "zone_append": false, 00:12:03.270 "compare": false, 00:12:03.270 "compare_and_write": false, 00:12:03.270 "abort": false, 00:12:03.270 "seek_hole": false, 00:12:03.270 "seek_data": false, 00:12:03.270 "copy": false, 00:12:03.270 
"nvme_iov_md": false 00:12:03.270 }, 00:12:03.270 "memory_domains": [ 00:12:03.270 { 00:12:03.270 "dma_device_id": "system", 00:12:03.270 "dma_device_type": 1 00:12:03.270 }, 00:12:03.270 { 00:12:03.270 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:03.270 "dma_device_type": 2 00:12:03.270 }, 00:12:03.270 { 00:12:03.270 "dma_device_id": "system", 00:12:03.270 "dma_device_type": 1 00:12:03.270 }, 00:12:03.270 { 00:12:03.270 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:03.270 "dma_device_type": 2 00:12:03.270 }, 00:12:03.270 { 00:12:03.270 "dma_device_id": "system", 00:12:03.270 "dma_device_type": 1 00:12:03.270 }, 00:12:03.270 { 00:12:03.270 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:03.270 "dma_device_type": 2 00:12:03.270 }, 00:12:03.270 { 00:12:03.270 "dma_device_id": "system", 00:12:03.270 "dma_device_type": 1 00:12:03.270 }, 00:12:03.270 { 00:12:03.270 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:03.270 "dma_device_type": 2 00:12:03.270 } 00:12:03.270 ], 00:12:03.270 "driver_specific": { 00:12:03.270 "raid": { 00:12:03.270 "uuid": "33ccc80c-9783-452a-bfe0-f3a6852188c6", 00:12:03.270 "strip_size_kb": 0, 00:12:03.270 "state": "online", 00:12:03.270 "raid_level": "raid1", 00:12:03.270 "superblock": false, 00:12:03.270 "num_base_bdevs": 4, 00:12:03.270 "num_base_bdevs_discovered": 4, 00:12:03.270 "num_base_bdevs_operational": 4, 00:12:03.270 "base_bdevs_list": [ 00:12:03.270 { 00:12:03.270 "name": "NewBaseBdev", 00:12:03.270 "uuid": "88ca7188-74ec-4f42-9ec4-3bc408abde76", 00:12:03.270 "is_configured": true, 00:12:03.270 "data_offset": 0, 00:12:03.270 "data_size": 65536 00:12:03.270 }, 00:12:03.270 { 00:12:03.270 "name": "BaseBdev2", 00:12:03.270 "uuid": "072c9471-dd99-4ad8-853b-370b4d8da9a8", 00:12:03.270 "is_configured": true, 00:12:03.270 "data_offset": 0, 00:12:03.270 "data_size": 65536 00:12:03.270 }, 00:12:03.270 { 00:12:03.270 "name": "BaseBdev3", 00:12:03.270 "uuid": "616231c7-b601-4077-8ab9-a6500cd2cbd3", 00:12:03.270 "is_configured": true, 
00:12:03.270 "data_offset": 0, 00:12:03.270 "data_size": 65536 00:12:03.270 }, 00:12:03.270 { 00:12:03.270 "name": "BaseBdev4", 00:12:03.270 "uuid": "875df570-5c68-4b10-ae73-8b06ae79e8d6", 00:12:03.270 "is_configured": true, 00:12:03.270 "data_offset": 0, 00:12:03.270 "data_size": 65536 00:12:03.270 } 00:12:03.270 ] 00:12:03.270 } 00:12:03.270 } 00:12:03.270 }' 00:12:03.270 16:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:03.270 16:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:03.270 BaseBdev2 00:12:03.270 BaseBdev3 00:12:03.270 BaseBdev4' 00:12:03.270 16:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:03.270 16:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:03.270 16:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:03.270 16:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:03.270 16:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:03.270 16:08:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.270 16:08:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.270 16:08:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.531 16:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:03.531 16:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:03.531 16:08:29 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:03.531 16:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:03.531 16:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:03.531 16:08:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.531 16:08:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.531 16:08:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.531 16:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:03.531 16:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:03.531 16:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:03.531 16:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:03.531 16:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:03.531 16:08:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.531 16:08:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.531 16:08:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.531 16:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:03.531 16:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:03.531 16:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:03.531 16:08:29 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:03.531 16:08:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.531 16:08:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.531 16:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:03.531 16:08:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.531 16:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:03.531 16:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:03.531 16:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:03.531 16:08:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.531 16:08:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.531 [2024-12-12 16:08:29.789590] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:03.531 [2024-12-12 16:08:29.789644] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:03.531 [2024-12-12 16:08:29.789773] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:03.531 [2024-12-12 16:08:29.790200] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:03.531 [2024-12-12 16:08:29.790224] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:12:03.531 16:08:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.531 16:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 75227 
00:12:03.531 16:08:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 75227 ']' 00:12:03.531 16:08:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 75227 00:12:03.531 16:08:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:12:03.531 16:08:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:03.531 16:08:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75227 00:12:03.531 16:08:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:03.531 16:08:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:03.531 16:08:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75227' 00:12:03.531 killing process with pid 75227 00:12:03.531 16:08:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 75227 00:12:03.531 [2024-12-12 16:08:29.839314] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:03.531 16:08:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 75227 00:12:04.100 [2024-12-12 16:08:30.357303] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:05.478 16:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:12:05.478 00:12:05.478 real 0m12.838s 00:12:05.478 user 0m19.957s 00:12:05.478 sys 0m2.323s 00:12:05.478 16:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:05.478 16:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.478 ************************************ 00:12:05.478 END TEST raid_state_function_test 00:12:05.478 ************************************ 00:12:05.737 16:08:31 bdev_raid -- 
bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:12:05.737 16:08:31 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:05.737 16:08:31 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:05.737 16:08:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:05.737 ************************************ 00:12:05.737 START TEST raid_state_function_test_sb 00:12:05.737 ************************************ 00:12:05.737 16:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 true 00:12:05.737 16:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:12:05.737 16:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:12:05.737 16:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:12:05.737 16:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:05.737 16:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:05.737 16:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:05.737 16:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:05.737 16:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:05.737 16:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:05.737 16:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:05.737 16:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:05.737 16:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:05.737 16:08:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:05.737 16:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:05.737 16:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:05.737 16:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:12:05.737 16:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:05.737 16:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:05.738 16:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:05.738 16:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:05.738 16:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:05.738 16:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:05.738 16:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:05.738 16:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:05.738 16:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:12:05.738 16:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:12:05.738 16:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:12:05.738 16:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:12:05.738 16:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=75909 00:12:05.738 16:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:05.738 16:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 75909' 00:12:05.738 Process raid pid: 75909 00:12:05.738 16:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 75909 00:12:05.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:05.738 16:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 75909 ']' 00:12:05.738 16:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:05.738 16:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:05.738 16:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:05.738 16:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:05.738 16:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.738 [2024-12-12 16:08:32.013334] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:12:05.738 [2024-12-12 16:08:32.013592] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:05.997 [2024-12-12 16:08:32.198217] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:06.257 [2024-12-12 16:08:32.360863] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:06.517 [2024-12-12 16:08:32.633648] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:06.517 [2024-12-12 16:08:32.633825] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:06.778 16:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:06.778 16:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:12:06.778 16:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:06.778 16:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.778 16:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.778 [2024-12-12 16:08:32.886607] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:06.778 [2024-12-12 16:08:32.886665] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:06.778 [2024-12-12 16:08:32.886678] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:06.778 [2024-12-12 16:08:32.886689] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:06.778 [2024-12-12 16:08:32.886697] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:12:06.778 [2024-12-12 16:08:32.886706] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:06.778 [2024-12-12 16:08:32.886713] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:06.778 [2024-12-12 16:08:32.886723] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:06.778 16:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.778 16:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:06.778 16:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:06.778 16:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:06.778 16:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:06.778 16:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:06.778 16:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:06.778 16:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:06.778 16:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:06.778 16:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:06.778 16:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:06.778 16:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.778 16:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:06.778 16:08:32 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.778 16:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.778 16:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.778 16:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:06.778 "name": "Existed_Raid", 00:12:06.778 "uuid": "09da4859-1d48-4e37-9ccf-853f611e9934", 00:12:06.778 "strip_size_kb": 0, 00:12:06.778 "state": "configuring", 00:12:06.778 "raid_level": "raid1", 00:12:06.778 "superblock": true, 00:12:06.778 "num_base_bdevs": 4, 00:12:06.778 "num_base_bdevs_discovered": 0, 00:12:06.778 "num_base_bdevs_operational": 4, 00:12:06.778 "base_bdevs_list": [ 00:12:06.778 { 00:12:06.778 "name": "BaseBdev1", 00:12:06.778 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:06.778 "is_configured": false, 00:12:06.778 "data_offset": 0, 00:12:06.778 "data_size": 0 00:12:06.778 }, 00:12:06.778 { 00:12:06.778 "name": "BaseBdev2", 00:12:06.778 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:06.778 "is_configured": false, 00:12:06.778 "data_offset": 0, 00:12:06.778 "data_size": 0 00:12:06.778 }, 00:12:06.778 { 00:12:06.778 "name": "BaseBdev3", 00:12:06.778 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:06.778 "is_configured": false, 00:12:06.778 "data_offset": 0, 00:12:06.778 "data_size": 0 00:12:06.778 }, 00:12:06.778 { 00:12:06.778 "name": "BaseBdev4", 00:12:06.778 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:06.778 "is_configured": false, 00:12:06.778 "data_offset": 0, 00:12:06.778 "data_size": 0 00:12:06.778 } 00:12:06.778 ] 00:12:06.778 }' 00:12:06.778 16:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:06.778 16:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.038 16:08:33 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:07.038 16:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.038 16:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.038 [2024-12-12 16:08:33.381762] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:07.038 [2024-12-12 16:08:33.381858] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:07.038 16:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.038 16:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:07.038 16:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.038 16:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.298 [2024-12-12 16:08:33.389735] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:07.298 [2024-12-12 16:08:33.389817] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:07.298 [2024-12-12 16:08:33.389845] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:07.298 [2024-12-12 16:08:33.389869] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:07.298 [2024-12-12 16:08:33.389887] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:07.298 [2024-12-12 16:08:33.389935] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:07.298 [2024-12-12 16:08:33.389988] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:12:07.298 [2024-12-12 16:08:33.390010] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:07.298 16:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.298 16:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:07.298 16:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.298 16:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.298 [2024-12-12 16:08:33.434514] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:07.298 BaseBdev1 00:12:07.298 16:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.298 16:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:07.298 16:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:07.298 16:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:07.298 16:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:07.298 16:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:07.298 16:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:07.298 16:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:07.298 16:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.298 16:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.298 16:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:07.298 16:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:07.298 16:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.298 16:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.298 [ 00:12:07.298 { 00:12:07.298 "name": "BaseBdev1", 00:12:07.298 "aliases": [ 00:12:07.298 "63b99d69-cb36-49fd-b4c3-162d34049c4c" 00:12:07.298 ], 00:12:07.298 "product_name": "Malloc disk", 00:12:07.298 "block_size": 512, 00:12:07.298 "num_blocks": 65536, 00:12:07.298 "uuid": "63b99d69-cb36-49fd-b4c3-162d34049c4c", 00:12:07.298 "assigned_rate_limits": { 00:12:07.298 "rw_ios_per_sec": 0, 00:12:07.298 "rw_mbytes_per_sec": 0, 00:12:07.298 "r_mbytes_per_sec": 0, 00:12:07.298 "w_mbytes_per_sec": 0 00:12:07.298 }, 00:12:07.298 "claimed": true, 00:12:07.298 "claim_type": "exclusive_write", 00:12:07.298 "zoned": false, 00:12:07.298 "supported_io_types": { 00:12:07.298 "read": true, 00:12:07.298 "write": true, 00:12:07.298 "unmap": true, 00:12:07.298 "flush": true, 00:12:07.298 "reset": true, 00:12:07.298 "nvme_admin": false, 00:12:07.298 "nvme_io": false, 00:12:07.298 "nvme_io_md": false, 00:12:07.298 "write_zeroes": true, 00:12:07.298 "zcopy": true, 00:12:07.298 "get_zone_info": false, 00:12:07.298 "zone_management": false, 00:12:07.298 "zone_append": false, 00:12:07.298 "compare": false, 00:12:07.298 "compare_and_write": false, 00:12:07.298 "abort": true, 00:12:07.298 "seek_hole": false, 00:12:07.298 "seek_data": false, 00:12:07.298 "copy": true, 00:12:07.298 "nvme_iov_md": false 00:12:07.298 }, 00:12:07.298 "memory_domains": [ 00:12:07.298 { 00:12:07.298 "dma_device_id": "system", 00:12:07.298 "dma_device_type": 1 00:12:07.298 }, 00:12:07.298 { 00:12:07.298 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:07.298 "dma_device_type": 2 00:12:07.298 } 00:12:07.298 ], 00:12:07.298 "driver_specific": {} 
00:12:07.298 } 00:12:07.298 ] 00:12:07.298 16:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.298 16:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:07.298 16:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:07.298 16:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:07.298 16:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:07.298 16:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:07.298 16:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:07.298 16:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:07.298 16:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:07.298 16:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:07.298 16:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:07.298 16:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:07.298 16:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.298 16:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:07.298 16:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.298 16:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.298 16:08:33 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.298 16:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:07.299 "name": "Existed_Raid", 00:12:07.299 "uuid": "f76cebd8-9321-49fa-b6fc-6f2e4752b34b", 00:12:07.299 "strip_size_kb": 0, 00:12:07.299 "state": "configuring", 00:12:07.299 "raid_level": "raid1", 00:12:07.299 "superblock": true, 00:12:07.299 "num_base_bdevs": 4, 00:12:07.299 "num_base_bdevs_discovered": 1, 00:12:07.299 "num_base_bdevs_operational": 4, 00:12:07.299 "base_bdevs_list": [ 00:12:07.299 { 00:12:07.299 "name": "BaseBdev1", 00:12:07.299 "uuid": "63b99d69-cb36-49fd-b4c3-162d34049c4c", 00:12:07.299 "is_configured": true, 00:12:07.299 "data_offset": 2048, 00:12:07.299 "data_size": 63488 00:12:07.299 }, 00:12:07.299 { 00:12:07.299 "name": "BaseBdev2", 00:12:07.299 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:07.299 "is_configured": false, 00:12:07.299 "data_offset": 0, 00:12:07.299 "data_size": 0 00:12:07.299 }, 00:12:07.299 { 00:12:07.299 "name": "BaseBdev3", 00:12:07.299 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:07.299 "is_configured": false, 00:12:07.299 "data_offset": 0, 00:12:07.299 "data_size": 0 00:12:07.299 }, 00:12:07.299 { 00:12:07.299 "name": "BaseBdev4", 00:12:07.299 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:07.299 "is_configured": false, 00:12:07.299 "data_offset": 0, 00:12:07.299 "data_size": 0 00:12:07.299 } 00:12:07.299 ] 00:12:07.299 }' 00:12:07.299 16:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:07.299 16:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.559 16:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:07.559 16:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.559 16:08:33 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:07.559 [2024-12-12 16:08:33.881798] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:07.559 [2024-12-12 16:08:33.881867] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:07.559 16:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.559 16:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:07.559 16:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.559 16:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.559 [2024-12-12 16:08:33.893836] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:07.559 [2024-12-12 16:08:33.895763] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:07.559 [2024-12-12 16:08:33.895808] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:07.559 [2024-12-12 16:08:33.895818] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:07.559 [2024-12-12 16:08:33.895828] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:07.559 [2024-12-12 16:08:33.895836] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:07.559 [2024-12-12 16:08:33.895844] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:07.559 16:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.559 16:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:07.559 16:08:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:07.559 16:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:07.559 16:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:07.559 16:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:07.559 16:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:07.559 16:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:07.559 16:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:07.559 16:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:07.559 16:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:07.559 16:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:07.559 16:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:07.559 16:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.559 16:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:07.559 16:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.559 16:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.819 16:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.819 16:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:07.819 "name": 
"Existed_Raid", 00:12:07.819 "uuid": "dc8ea5c6-c264-400c-8f07-fdae69ddb861", 00:12:07.819 "strip_size_kb": 0, 00:12:07.819 "state": "configuring", 00:12:07.819 "raid_level": "raid1", 00:12:07.819 "superblock": true, 00:12:07.819 "num_base_bdevs": 4, 00:12:07.819 "num_base_bdevs_discovered": 1, 00:12:07.819 "num_base_bdevs_operational": 4, 00:12:07.819 "base_bdevs_list": [ 00:12:07.819 { 00:12:07.819 "name": "BaseBdev1", 00:12:07.819 "uuid": "63b99d69-cb36-49fd-b4c3-162d34049c4c", 00:12:07.819 "is_configured": true, 00:12:07.819 "data_offset": 2048, 00:12:07.819 "data_size": 63488 00:12:07.819 }, 00:12:07.819 { 00:12:07.819 "name": "BaseBdev2", 00:12:07.819 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:07.819 "is_configured": false, 00:12:07.819 "data_offset": 0, 00:12:07.819 "data_size": 0 00:12:07.819 }, 00:12:07.819 { 00:12:07.819 "name": "BaseBdev3", 00:12:07.819 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:07.819 "is_configured": false, 00:12:07.819 "data_offset": 0, 00:12:07.819 "data_size": 0 00:12:07.819 }, 00:12:07.819 { 00:12:07.819 "name": "BaseBdev4", 00:12:07.819 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:07.819 "is_configured": false, 00:12:07.819 "data_offset": 0, 00:12:07.819 "data_size": 0 00:12:07.819 } 00:12:07.819 ] 00:12:07.819 }' 00:12:07.819 16:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:07.819 16:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.079 16:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:08.079 16:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.079 16:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.079 [2024-12-12 16:08:34.394972] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:08.079 
BaseBdev2 00:12:08.079 16:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.079 16:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:08.079 16:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:08.079 16:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:08.079 16:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:08.079 16:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:08.079 16:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:08.079 16:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:08.079 16:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.079 16:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.079 16:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.079 16:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:08.079 16:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.079 16:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.079 [ 00:12:08.079 { 00:12:08.079 "name": "BaseBdev2", 00:12:08.079 "aliases": [ 00:12:08.079 "3008249a-2270-475d-a48f-e0e5407a928d" 00:12:08.079 ], 00:12:08.079 "product_name": "Malloc disk", 00:12:08.079 "block_size": 512, 00:12:08.079 "num_blocks": 65536, 00:12:08.079 "uuid": "3008249a-2270-475d-a48f-e0e5407a928d", 00:12:08.079 "assigned_rate_limits": { 
00:12:08.079 "rw_ios_per_sec": 0, 00:12:08.079 "rw_mbytes_per_sec": 0, 00:12:08.079 "r_mbytes_per_sec": 0, 00:12:08.079 "w_mbytes_per_sec": 0 00:12:08.079 }, 00:12:08.079 "claimed": true, 00:12:08.079 "claim_type": "exclusive_write", 00:12:08.079 "zoned": false, 00:12:08.079 "supported_io_types": { 00:12:08.079 "read": true, 00:12:08.079 "write": true, 00:12:08.079 "unmap": true, 00:12:08.079 "flush": true, 00:12:08.079 "reset": true, 00:12:08.079 "nvme_admin": false, 00:12:08.079 "nvme_io": false, 00:12:08.079 "nvme_io_md": false, 00:12:08.079 "write_zeroes": true, 00:12:08.079 "zcopy": true, 00:12:08.079 "get_zone_info": false, 00:12:08.079 "zone_management": false, 00:12:08.079 "zone_append": false, 00:12:08.079 "compare": false, 00:12:08.079 "compare_and_write": false, 00:12:08.079 "abort": true, 00:12:08.079 "seek_hole": false, 00:12:08.079 "seek_data": false, 00:12:08.079 "copy": true, 00:12:08.079 "nvme_iov_md": false 00:12:08.079 }, 00:12:08.079 "memory_domains": [ 00:12:08.079 { 00:12:08.079 "dma_device_id": "system", 00:12:08.338 "dma_device_type": 1 00:12:08.338 }, 00:12:08.338 { 00:12:08.338 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:08.338 "dma_device_type": 2 00:12:08.338 } 00:12:08.338 ], 00:12:08.338 "driver_specific": {} 00:12:08.338 } 00:12:08.338 ] 00:12:08.338 16:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.338 16:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:08.338 16:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:08.338 16:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:08.338 16:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:08.338 16:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:12:08.338 16:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:08.338 16:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:08.339 16:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:08.339 16:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:08.339 16:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:08.339 16:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:08.339 16:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:08.339 16:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:08.339 16:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.339 16:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.339 16:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.339 16:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:08.339 16:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.339 16:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:08.339 "name": "Existed_Raid", 00:12:08.339 "uuid": "dc8ea5c6-c264-400c-8f07-fdae69ddb861", 00:12:08.339 "strip_size_kb": 0, 00:12:08.339 "state": "configuring", 00:12:08.339 "raid_level": "raid1", 00:12:08.339 "superblock": true, 00:12:08.339 "num_base_bdevs": 4, 00:12:08.339 "num_base_bdevs_discovered": 2, 00:12:08.339 "num_base_bdevs_operational": 4, 00:12:08.339 
"base_bdevs_list": [ 00:12:08.339 { 00:12:08.339 "name": "BaseBdev1", 00:12:08.339 "uuid": "63b99d69-cb36-49fd-b4c3-162d34049c4c", 00:12:08.339 "is_configured": true, 00:12:08.339 "data_offset": 2048, 00:12:08.339 "data_size": 63488 00:12:08.339 }, 00:12:08.339 { 00:12:08.339 "name": "BaseBdev2", 00:12:08.339 "uuid": "3008249a-2270-475d-a48f-e0e5407a928d", 00:12:08.339 "is_configured": true, 00:12:08.339 "data_offset": 2048, 00:12:08.339 "data_size": 63488 00:12:08.339 }, 00:12:08.339 { 00:12:08.339 "name": "BaseBdev3", 00:12:08.339 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.339 "is_configured": false, 00:12:08.339 "data_offset": 0, 00:12:08.339 "data_size": 0 00:12:08.339 }, 00:12:08.339 { 00:12:08.339 "name": "BaseBdev4", 00:12:08.339 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.339 "is_configured": false, 00:12:08.339 "data_offset": 0, 00:12:08.339 "data_size": 0 00:12:08.339 } 00:12:08.339 ] 00:12:08.339 }' 00:12:08.339 16:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:08.339 16:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.598 16:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:08.598 16:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.598 16:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.598 [2024-12-12 16:08:34.936729] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:08.598 BaseBdev3 00:12:08.598 16:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.598 16:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:08.598 16:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local 
bdev_name=BaseBdev3 00:12:08.598 16:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:08.598 16:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:08.598 16:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:08.598 16:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:08.598 16:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:08.598 16:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.598 16:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.857 16:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.857 16:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:08.857 16:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.857 16:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.857 [ 00:12:08.857 { 00:12:08.857 "name": "BaseBdev3", 00:12:08.857 "aliases": [ 00:12:08.857 "b5577a5e-5f85-45b6-ba42-a7680f02248c" 00:12:08.857 ], 00:12:08.857 "product_name": "Malloc disk", 00:12:08.857 "block_size": 512, 00:12:08.857 "num_blocks": 65536, 00:12:08.857 "uuid": "b5577a5e-5f85-45b6-ba42-a7680f02248c", 00:12:08.857 "assigned_rate_limits": { 00:12:08.857 "rw_ios_per_sec": 0, 00:12:08.857 "rw_mbytes_per_sec": 0, 00:12:08.857 "r_mbytes_per_sec": 0, 00:12:08.857 "w_mbytes_per_sec": 0 00:12:08.857 }, 00:12:08.857 "claimed": true, 00:12:08.857 "claim_type": "exclusive_write", 00:12:08.857 "zoned": false, 00:12:08.857 "supported_io_types": { 00:12:08.857 "read": true, 00:12:08.857 
"write": true, 00:12:08.857 "unmap": true, 00:12:08.857 "flush": true, 00:12:08.857 "reset": true, 00:12:08.857 "nvme_admin": false, 00:12:08.857 "nvme_io": false, 00:12:08.857 "nvme_io_md": false, 00:12:08.857 "write_zeroes": true, 00:12:08.857 "zcopy": true, 00:12:08.857 "get_zone_info": false, 00:12:08.857 "zone_management": false, 00:12:08.857 "zone_append": false, 00:12:08.857 "compare": false, 00:12:08.857 "compare_and_write": false, 00:12:08.857 "abort": true, 00:12:08.857 "seek_hole": false, 00:12:08.857 "seek_data": false, 00:12:08.857 "copy": true, 00:12:08.857 "nvme_iov_md": false 00:12:08.857 }, 00:12:08.857 "memory_domains": [ 00:12:08.857 { 00:12:08.857 "dma_device_id": "system", 00:12:08.857 "dma_device_type": 1 00:12:08.857 }, 00:12:08.857 { 00:12:08.857 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:08.857 "dma_device_type": 2 00:12:08.857 } 00:12:08.857 ], 00:12:08.857 "driver_specific": {} 00:12:08.857 } 00:12:08.857 ] 00:12:08.857 16:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.857 16:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:08.857 16:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:08.857 16:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:08.857 16:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:08.857 16:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:08.857 16:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:08.857 16:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:08.857 16:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:12:08.857 16:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:08.857 16:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:08.857 16:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:08.857 16:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:08.857 16:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:08.857 16:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.857 16:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.857 16:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.857 16:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:08.858 16:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.858 16:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:08.858 "name": "Existed_Raid", 00:12:08.858 "uuid": "dc8ea5c6-c264-400c-8f07-fdae69ddb861", 00:12:08.858 "strip_size_kb": 0, 00:12:08.858 "state": "configuring", 00:12:08.858 "raid_level": "raid1", 00:12:08.858 "superblock": true, 00:12:08.858 "num_base_bdevs": 4, 00:12:08.858 "num_base_bdevs_discovered": 3, 00:12:08.858 "num_base_bdevs_operational": 4, 00:12:08.858 "base_bdevs_list": [ 00:12:08.858 { 00:12:08.858 "name": "BaseBdev1", 00:12:08.858 "uuid": "63b99d69-cb36-49fd-b4c3-162d34049c4c", 00:12:08.858 "is_configured": true, 00:12:08.858 "data_offset": 2048, 00:12:08.858 "data_size": 63488 00:12:08.858 }, 00:12:08.858 { 00:12:08.858 "name": "BaseBdev2", 00:12:08.858 "uuid": 
"3008249a-2270-475d-a48f-e0e5407a928d", 00:12:08.858 "is_configured": true, 00:12:08.858 "data_offset": 2048, 00:12:08.858 "data_size": 63488 00:12:08.858 }, 00:12:08.858 { 00:12:08.858 "name": "BaseBdev3", 00:12:08.858 "uuid": "b5577a5e-5f85-45b6-ba42-a7680f02248c", 00:12:08.858 "is_configured": true, 00:12:08.858 "data_offset": 2048, 00:12:08.858 "data_size": 63488 00:12:08.858 }, 00:12:08.858 { 00:12:08.858 "name": "BaseBdev4", 00:12:08.858 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.858 "is_configured": false, 00:12:08.858 "data_offset": 0, 00:12:08.858 "data_size": 0 00:12:08.858 } 00:12:08.858 ] 00:12:08.858 }' 00:12:08.858 16:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:08.858 16:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.116 16:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:09.116 16:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.116 16:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.376 [2024-12-12 16:08:35.481693] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:09.376 [2024-12-12 16:08:35.482211] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:09.376 [2024-12-12 16:08:35.482280] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:09.376 [2024-12-12 16:08:35.482626] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:09.376 [2024-12-12 16:08:35.482866] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:09.376 [2024-12-12 16:08:35.482945] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:12:09.376 BaseBdev4 00:12:09.376
[2024-12-12 16:08:35.483185] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:09.376 16:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.376 16:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:12:09.376 16:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:09.376 16:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:09.376 16:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:09.376 16:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:09.376 16:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:09.376 16:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:09.376 16:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.376 16:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.376 16:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.376 16:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:09.376 16:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.376 16:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.376 [ 00:12:09.376 { 00:12:09.376 "name": "BaseBdev4", 00:12:09.376 "aliases": [ 00:12:09.376 "a2066d99-532d-4404-9bb4-f32582bc1fcf" 00:12:09.376 ], 00:12:09.376 "product_name": "Malloc disk", 00:12:09.376 "block_size": 512, 00:12:09.376
"num_blocks": 65536, 00:12:09.376 "uuid": "a2066d99-532d-4404-9bb4-f32582bc1fcf", 00:12:09.376 "assigned_rate_limits": { 00:12:09.376 "rw_ios_per_sec": 0, 00:12:09.376 "rw_mbytes_per_sec": 0, 00:12:09.376 "r_mbytes_per_sec": 0, 00:12:09.376 "w_mbytes_per_sec": 0 00:12:09.376 }, 00:12:09.376 "claimed": true, 00:12:09.376 "claim_type": "exclusive_write", 00:12:09.376 "zoned": false, 00:12:09.376 "supported_io_types": { 00:12:09.376 "read": true, 00:12:09.376 "write": true, 00:12:09.376 "unmap": true, 00:12:09.376 "flush": true, 00:12:09.376 "reset": true, 00:12:09.376 "nvme_admin": false, 00:12:09.376 "nvme_io": false, 00:12:09.376 "nvme_io_md": false, 00:12:09.376 "write_zeroes": true, 00:12:09.376 "zcopy": true, 00:12:09.376 "get_zone_info": false, 00:12:09.377 "zone_management": false, 00:12:09.377 "zone_append": false, 00:12:09.377 "compare": false, 00:12:09.377 "compare_and_write": false, 00:12:09.377 "abort": true, 00:12:09.377 "seek_hole": false, 00:12:09.377 "seek_data": false, 00:12:09.377 "copy": true, 00:12:09.377 "nvme_iov_md": false 00:12:09.377 }, 00:12:09.377 "memory_domains": [ 00:12:09.377 { 00:12:09.377 "dma_device_id": "system", 00:12:09.377 "dma_device_type": 1 00:12:09.377 }, 00:12:09.377 { 00:12:09.377 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:09.377 "dma_device_type": 2 00:12:09.377 } 00:12:09.377 ], 00:12:09.377 "driver_specific": {} 00:12:09.377 } 00:12:09.377 ] 00:12:09.377 16:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.377 16:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:09.377 16:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:09.377 16:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:09.377 16:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:12:09.377 16:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:09.377 16:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:09.377 16:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:09.377 16:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:09.377 16:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:09.377 16:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:09.377 16:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:09.377 16:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:09.377 16:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:09.377 16:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.377 16:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:09.377 16:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.377 16:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.377 16:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.377 16:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:09.377 "name": "Existed_Raid", 00:12:09.377 "uuid": "dc8ea5c6-c264-400c-8f07-fdae69ddb861", 00:12:09.377 "strip_size_kb": 0, 00:12:09.377 "state": "online", 00:12:09.377 "raid_level": "raid1", 00:12:09.377 "superblock": true, 00:12:09.377 "num_base_bdevs": 4, 
00:12:09.377 "num_base_bdevs_discovered": 4, 00:12:09.377 "num_base_bdevs_operational": 4, 00:12:09.377 "base_bdevs_list": [ 00:12:09.377 { 00:12:09.377 "name": "BaseBdev1", 00:12:09.377 "uuid": "63b99d69-cb36-49fd-b4c3-162d34049c4c", 00:12:09.377 "is_configured": true, 00:12:09.377 "data_offset": 2048, 00:12:09.377 "data_size": 63488 00:12:09.377 }, 00:12:09.377 { 00:12:09.377 "name": "BaseBdev2", 00:12:09.377 "uuid": "3008249a-2270-475d-a48f-e0e5407a928d", 00:12:09.377 "is_configured": true, 00:12:09.377 "data_offset": 2048, 00:12:09.377 "data_size": 63488 00:12:09.377 }, 00:12:09.377 { 00:12:09.377 "name": "BaseBdev3", 00:12:09.377 "uuid": "b5577a5e-5f85-45b6-ba42-a7680f02248c", 00:12:09.377 "is_configured": true, 00:12:09.377 "data_offset": 2048, 00:12:09.377 "data_size": 63488 00:12:09.377 }, 00:12:09.377 { 00:12:09.377 "name": "BaseBdev4", 00:12:09.377 "uuid": "a2066d99-532d-4404-9bb4-f32582bc1fcf", 00:12:09.377 "is_configured": true, 00:12:09.377 "data_offset": 2048, 00:12:09.377 "data_size": 63488 00:12:09.377 } 00:12:09.377 ] 00:12:09.377 }' 00:12:09.377 16:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:09.377 16:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.637 16:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:09.637 16:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:09.637 16:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:09.637 16:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:09.637 16:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:09.637 16:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:09.637 
16:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:09.637 16:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.637 16:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:09.637 16:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.637 [2024-12-12 16:08:35.929393] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:09.637 16:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.637 16:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:09.637 "name": "Existed_Raid", 00:12:09.637 "aliases": [ 00:12:09.637 "dc8ea5c6-c264-400c-8f07-fdae69ddb861" 00:12:09.637 ], 00:12:09.637 "product_name": "Raid Volume", 00:12:09.637 "block_size": 512, 00:12:09.637 "num_blocks": 63488, 00:12:09.637 "uuid": "dc8ea5c6-c264-400c-8f07-fdae69ddb861", 00:12:09.637 "assigned_rate_limits": { 00:12:09.637 "rw_ios_per_sec": 0, 00:12:09.637 "rw_mbytes_per_sec": 0, 00:12:09.637 "r_mbytes_per_sec": 0, 00:12:09.637 "w_mbytes_per_sec": 0 00:12:09.637 }, 00:12:09.637 "claimed": false, 00:12:09.637 "zoned": false, 00:12:09.637 "supported_io_types": { 00:12:09.637 "read": true, 00:12:09.637 "write": true, 00:12:09.637 "unmap": false, 00:12:09.637 "flush": false, 00:12:09.637 "reset": true, 00:12:09.637 "nvme_admin": false, 00:12:09.637 "nvme_io": false, 00:12:09.637 "nvme_io_md": false, 00:12:09.637 "write_zeroes": true, 00:12:09.637 "zcopy": false, 00:12:09.637 "get_zone_info": false, 00:12:09.637 "zone_management": false, 00:12:09.637 "zone_append": false, 00:12:09.637 "compare": false, 00:12:09.637 "compare_and_write": false, 00:12:09.637 "abort": false, 00:12:09.637 "seek_hole": false, 00:12:09.637 "seek_data": false, 00:12:09.637 "copy": false, 00:12:09.637 
"nvme_iov_md": false 00:12:09.637 }, 00:12:09.637 "memory_domains": [ 00:12:09.637 { 00:12:09.637 "dma_device_id": "system", 00:12:09.637 "dma_device_type": 1 00:12:09.637 }, 00:12:09.637 { 00:12:09.637 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:09.637 "dma_device_type": 2 00:12:09.637 }, 00:12:09.637 { 00:12:09.637 "dma_device_id": "system", 00:12:09.637 "dma_device_type": 1 00:12:09.637 }, 00:12:09.637 { 00:12:09.637 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:09.637 "dma_device_type": 2 00:12:09.637 }, 00:12:09.637 { 00:12:09.637 "dma_device_id": "system", 00:12:09.637 "dma_device_type": 1 00:12:09.637 }, 00:12:09.637 { 00:12:09.637 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:09.637 "dma_device_type": 2 00:12:09.637 }, 00:12:09.637 { 00:12:09.637 "dma_device_id": "system", 00:12:09.637 "dma_device_type": 1 00:12:09.637 }, 00:12:09.637 { 00:12:09.637 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:09.637 "dma_device_type": 2 00:12:09.637 } 00:12:09.637 ], 00:12:09.637 "driver_specific": { 00:12:09.637 "raid": { 00:12:09.637 "uuid": "dc8ea5c6-c264-400c-8f07-fdae69ddb861", 00:12:09.637 "strip_size_kb": 0, 00:12:09.637 "state": "online", 00:12:09.637 "raid_level": "raid1", 00:12:09.637 "superblock": true, 00:12:09.637 "num_base_bdevs": 4, 00:12:09.637 "num_base_bdevs_discovered": 4, 00:12:09.637 "num_base_bdevs_operational": 4, 00:12:09.637 "base_bdevs_list": [ 00:12:09.637 { 00:12:09.637 "name": "BaseBdev1", 00:12:09.637 "uuid": "63b99d69-cb36-49fd-b4c3-162d34049c4c", 00:12:09.637 "is_configured": true, 00:12:09.637 "data_offset": 2048, 00:12:09.637 "data_size": 63488 00:12:09.637 }, 00:12:09.637 { 00:12:09.637 "name": "BaseBdev2", 00:12:09.637 "uuid": "3008249a-2270-475d-a48f-e0e5407a928d", 00:12:09.637 "is_configured": true, 00:12:09.637 "data_offset": 2048, 00:12:09.637 "data_size": 63488 00:12:09.637 }, 00:12:09.637 { 00:12:09.637 "name": "BaseBdev3", 00:12:09.637 "uuid": "b5577a5e-5f85-45b6-ba42-a7680f02248c", 00:12:09.637 "is_configured": true, 
00:12:09.637 "data_offset": 2048, 00:12:09.637 "data_size": 63488 00:12:09.637 }, 00:12:09.637 { 00:12:09.637 "name": "BaseBdev4", 00:12:09.637 "uuid": "a2066d99-532d-4404-9bb4-f32582bc1fcf", 00:12:09.637 "is_configured": true, 00:12:09.637 "data_offset": 2048, 00:12:09.637 "data_size": 63488 00:12:09.637 } 00:12:09.637 ] 00:12:09.637 } 00:12:09.637 } 00:12:09.637 }' 00:12:09.637 16:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:09.897 16:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:09.897 BaseBdev2 00:12:09.897 BaseBdev3 00:12:09.897 BaseBdev4' 00:12:09.897 16:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:09.897 16:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:09.897 16:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:09.897 16:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:09.897 16:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:09.897 16:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.897 16:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.897 16:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.897 16:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:09.897 16:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:09.897 16:08:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:09.897 16:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:09.897 16:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.897 16:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.897 16:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:09.897 16:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.897 16:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:09.897 16:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:09.897 16:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:09.897 16:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:09.897 16:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:09.897 16:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.897 16:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.897 16:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.897 16:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:09.897 16:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:09.897 16:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:12:09.897 16:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:09.897 16:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:09.897 16:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.897 16:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.897 16:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.157 16:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:10.157 16:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:10.157 16:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:10.157 16:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.157 16:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.157 [2024-12-12 16:08:36.272480] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:10.157 16:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.157 16:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:10.157 16:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:12:10.157 16:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:10.157 16:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:12:10.157 16:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:12:10.157 16:08:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:12:10.157 16:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:10.157 16:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:10.157 16:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:10.157 16:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:10.157 16:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:10.157 16:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:10.157 16:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:10.157 16:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:10.157 16:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:10.157 16:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.157 16:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:10.157 16:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.157 16:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.157 16:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.157 16:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:10.157 "name": "Existed_Raid", 00:12:10.157 "uuid": "dc8ea5c6-c264-400c-8f07-fdae69ddb861", 00:12:10.157 "strip_size_kb": 0, 00:12:10.157 
"state": "online", 00:12:10.157 "raid_level": "raid1", 00:12:10.157 "superblock": true, 00:12:10.157 "num_base_bdevs": 4, 00:12:10.157 "num_base_bdevs_discovered": 3, 00:12:10.157 "num_base_bdevs_operational": 3, 00:12:10.157 "base_bdevs_list": [ 00:12:10.157 { 00:12:10.157 "name": null, 00:12:10.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:10.157 "is_configured": false, 00:12:10.157 "data_offset": 0, 00:12:10.157 "data_size": 63488 00:12:10.157 }, 00:12:10.157 { 00:12:10.157 "name": "BaseBdev2", 00:12:10.157 "uuid": "3008249a-2270-475d-a48f-e0e5407a928d", 00:12:10.157 "is_configured": true, 00:12:10.157 "data_offset": 2048, 00:12:10.158 "data_size": 63488 00:12:10.158 }, 00:12:10.158 { 00:12:10.158 "name": "BaseBdev3", 00:12:10.158 "uuid": "b5577a5e-5f85-45b6-ba42-a7680f02248c", 00:12:10.158 "is_configured": true, 00:12:10.158 "data_offset": 2048, 00:12:10.158 "data_size": 63488 00:12:10.158 }, 00:12:10.158 { 00:12:10.158 "name": "BaseBdev4", 00:12:10.158 "uuid": "a2066d99-532d-4404-9bb4-f32582bc1fcf", 00:12:10.158 "is_configured": true, 00:12:10.158 "data_offset": 2048, 00:12:10.158 "data_size": 63488 00:12:10.158 } 00:12:10.158 ] 00:12:10.158 }' 00:12:10.158 16:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:10.158 16:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.511 16:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:10.511 16:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:10.511 16:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.511 16:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:10.511 16:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.511 16:08:36 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.511 16:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.785 16:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:10.785 16:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:10.785 16:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:10.785 16:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.785 16:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.785 [2024-12-12 16:08:36.847969] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:10.785 16:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.785 16:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:10.785 16:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:10.785 16:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.785 16:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:10.785 16:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.785 16:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.785 16:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.785 16:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:10.785 16:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:12:10.785 16:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:10.785 16:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.785 16:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.785 [2024-12-12 16:08:37.011505] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:10.785 16:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.785 16:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:10.785 16:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:10.785 16:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.785 16:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:10.785 16:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.785 16:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.045 16:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.045 16:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:11.045 16:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:11.045 16:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:12:11.045 16:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.045 16:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.045 [2024-12-12 16:08:37.176216] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:12:11.045 [2024-12-12 16:08:37.176369] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:11.045 [2024-12-12 16:08:37.280681] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:11.045 [2024-12-12 16:08:37.280760] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:11.045 [2024-12-12 16:08:37.280776] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:11.045 16:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.045 16:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:11.045 16:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:11.045 16:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.045 16:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:11.045 16:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.045 16:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.045 16:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.045 16:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:11.045 16:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:11.045 16:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:12:11.045 16:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:11.045 16:08:37 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:11.045 16:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:11.045 16:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.045 16:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.045 BaseBdev2 00:12:11.045 16:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.045 16:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:11.045 16:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:11.045 16:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:11.045 16:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:11.045 16:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:11.045 16:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:11.045 16:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:11.045 16:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.045 16:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.045 16:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.045 16:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:11.045 16:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.045 16:08:37 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:12:11.305 [ 00:12:11.305 { 00:12:11.305 "name": "BaseBdev2", 00:12:11.305 "aliases": [ 00:12:11.305 "bbc929c3-69b2-4fa3-af66-817474ff65ef" 00:12:11.305 ], 00:12:11.305 "product_name": "Malloc disk", 00:12:11.305 "block_size": 512, 00:12:11.305 "num_blocks": 65536, 00:12:11.305 "uuid": "bbc929c3-69b2-4fa3-af66-817474ff65ef", 00:12:11.305 "assigned_rate_limits": { 00:12:11.305 "rw_ios_per_sec": 0, 00:12:11.305 "rw_mbytes_per_sec": 0, 00:12:11.305 "r_mbytes_per_sec": 0, 00:12:11.305 "w_mbytes_per_sec": 0 00:12:11.305 }, 00:12:11.305 "claimed": false, 00:12:11.305 "zoned": false, 00:12:11.305 "supported_io_types": { 00:12:11.305 "read": true, 00:12:11.305 "write": true, 00:12:11.305 "unmap": true, 00:12:11.305 "flush": true, 00:12:11.305 "reset": true, 00:12:11.305 "nvme_admin": false, 00:12:11.305 "nvme_io": false, 00:12:11.305 "nvme_io_md": false, 00:12:11.305 "write_zeroes": true, 00:12:11.305 "zcopy": true, 00:12:11.305 "get_zone_info": false, 00:12:11.305 "zone_management": false, 00:12:11.305 "zone_append": false, 00:12:11.305 "compare": false, 00:12:11.305 "compare_and_write": false, 00:12:11.305 "abort": true, 00:12:11.305 "seek_hole": false, 00:12:11.305 "seek_data": false, 00:12:11.305 "copy": true, 00:12:11.305 "nvme_iov_md": false 00:12:11.305 }, 00:12:11.305 "memory_domains": [ 00:12:11.305 { 00:12:11.305 "dma_device_id": "system", 00:12:11.305 "dma_device_type": 1 00:12:11.305 }, 00:12:11.305 { 00:12:11.305 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:11.305 "dma_device_type": 2 00:12:11.305 } 00:12:11.305 ], 00:12:11.305 "driver_specific": {} 00:12:11.305 } 00:12:11.305 ] 00:12:11.305 16:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.305 16:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:11.306 16:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:11.306 16:08:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:11.306 16:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:11.306 16:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.306 16:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.306 BaseBdev3 00:12:11.306 16:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.306 16:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:11.306 16:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:11.306 16:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:11.306 16:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:11.306 16:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:11.306 16:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:11.306 16:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:11.306 16:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.306 16:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.306 16:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.306 16:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:11.306 16:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.306 16:08:37 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.306 [ 00:12:11.306 { 00:12:11.306 "name": "BaseBdev3", 00:12:11.306 "aliases": [ 00:12:11.306 "2771bde3-1f5b-4f04-8471-3c295e0dd85b" 00:12:11.306 ], 00:12:11.306 "product_name": "Malloc disk", 00:12:11.306 "block_size": 512, 00:12:11.306 "num_blocks": 65536, 00:12:11.306 "uuid": "2771bde3-1f5b-4f04-8471-3c295e0dd85b", 00:12:11.306 "assigned_rate_limits": { 00:12:11.306 "rw_ios_per_sec": 0, 00:12:11.306 "rw_mbytes_per_sec": 0, 00:12:11.306 "r_mbytes_per_sec": 0, 00:12:11.306 "w_mbytes_per_sec": 0 00:12:11.306 }, 00:12:11.306 "claimed": false, 00:12:11.306 "zoned": false, 00:12:11.306 "supported_io_types": { 00:12:11.306 "read": true, 00:12:11.306 "write": true, 00:12:11.306 "unmap": true, 00:12:11.306 "flush": true, 00:12:11.306 "reset": true, 00:12:11.306 "nvme_admin": false, 00:12:11.306 "nvme_io": false, 00:12:11.306 "nvme_io_md": false, 00:12:11.306 "write_zeroes": true, 00:12:11.306 "zcopy": true, 00:12:11.306 "get_zone_info": false, 00:12:11.306 "zone_management": false, 00:12:11.306 "zone_append": false, 00:12:11.306 "compare": false, 00:12:11.306 "compare_and_write": false, 00:12:11.306 "abort": true, 00:12:11.306 "seek_hole": false, 00:12:11.306 "seek_data": false, 00:12:11.306 "copy": true, 00:12:11.306 "nvme_iov_md": false 00:12:11.306 }, 00:12:11.306 "memory_domains": [ 00:12:11.306 { 00:12:11.306 "dma_device_id": "system", 00:12:11.306 "dma_device_type": 1 00:12:11.306 }, 00:12:11.306 { 00:12:11.306 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:11.306 "dma_device_type": 2 00:12:11.306 } 00:12:11.306 ], 00:12:11.306 "driver_specific": {} 00:12:11.306 } 00:12:11.306 ] 00:12:11.306 16:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.306 16:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:11.306 16:08:37 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:11.306 16:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:11.306 16:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:11.306 16:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.306 16:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.306 BaseBdev4 00:12:11.306 16:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.306 16:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:12:11.306 16:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:11.306 16:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:11.306 16:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:11.306 16:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:11.306 16:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:11.306 16:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:11.306 16:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.306 16:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.306 16:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.306 16:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:11.306 16:08:37 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.306 16:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.306 [ 00:12:11.306 { 00:12:11.306 "name": "BaseBdev4", 00:12:11.306 "aliases": [ 00:12:11.306 "7ca29541-7fe3-4082-84e8-efe473c5e732" 00:12:11.306 ], 00:12:11.306 "product_name": "Malloc disk", 00:12:11.306 "block_size": 512, 00:12:11.306 "num_blocks": 65536, 00:12:11.306 "uuid": "7ca29541-7fe3-4082-84e8-efe473c5e732", 00:12:11.306 "assigned_rate_limits": { 00:12:11.306 "rw_ios_per_sec": 0, 00:12:11.306 "rw_mbytes_per_sec": 0, 00:12:11.306 "r_mbytes_per_sec": 0, 00:12:11.306 "w_mbytes_per_sec": 0 00:12:11.306 }, 00:12:11.306 "claimed": false, 00:12:11.306 "zoned": false, 00:12:11.306 "supported_io_types": { 00:12:11.306 "read": true, 00:12:11.306 "write": true, 00:12:11.306 "unmap": true, 00:12:11.306 "flush": true, 00:12:11.306 "reset": true, 00:12:11.306 "nvme_admin": false, 00:12:11.306 "nvme_io": false, 00:12:11.306 "nvme_io_md": false, 00:12:11.306 "write_zeroes": true, 00:12:11.306 "zcopy": true, 00:12:11.306 "get_zone_info": false, 00:12:11.306 "zone_management": false, 00:12:11.306 "zone_append": false, 00:12:11.306 "compare": false, 00:12:11.306 "compare_and_write": false, 00:12:11.306 "abort": true, 00:12:11.306 "seek_hole": false, 00:12:11.306 "seek_data": false, 00:12:11.306 "copy": true, 00:12:11.306 "nvme_iov_md": false 00:12:11.306 }, 00:12:11.306 "memory_domains": [ 00:12:11.306 { 00:12:11.306 "dma_device_id": "system", 00:12:11.306 "dma_device_type": 1 00:12:11.306 }, 00:12:11.306 { 00:12:11.306 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:11.306 "dma_device_type": 2 00:12:11.306 } 00:12:11.306 ], 00:12:11.306 "driver_specific": {} 00:12:11.306 } 00:12:11.306 ] 00:12:11.306 16:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.306 16:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
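The trace above repeats one pattern per base bdev: `bdev_malloc_create 32 512 -b BaseBdevN`, then `waitforbdev`, which calls `bdev_wait_for_examine` and polls `bdev_get_bdevs -b BaseBdevN -t 2000` until the malloc disk shows up. A minimal Python sketch of that readiness check, operating on records shaped like the JSON dumped above — the helper name `wait_for_bdev` and its polling interface are assumptions for illustration, not SPDK code:

```python
import time

def wait_for_bdev(get_bdevs, name, timeout_s=2.0, poll_s=0.1):
    """Poll a bdev_get_bdevs-style callable until `name` appears.

    `get_bdevs` returns a list of bdev records (dicts with a "name" key),
    mirroring the JSON in the trace. Hypothetical helper, not SPDK's
    actual waitforbdev implementation."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        for bdev in get_bdevs():
            if bdev.get("name") == name:
                return bdev
        time.sleep(poll_s)
    raise TimeoutError(f"bdev {name} did not appear within {timeout_s}s")

# Simulated RPC response shaped like the log's bdev_get_bdevs output.
fake_bdevs = [{"name": "BaseBdev2", "product_name": "Malloc disk",
               "block_size": 512, "num_blocks": 65536, "claimed": False}]
found = wait_for_bdev(lambda: fake_bdevs, "BaseBdev2")
```

The real test additionally passes `-t 2000` so the RPC itself retries for 2000 ms server-side; the sketch folds that into client-side polling.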
00:12:11.306 16:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:11.306 16:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:11.306 16:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:11.306 16:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.306 16:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.306 [2024-12-12 16:08:37.605507] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:11.306 [2024-12-12 16:08:37.605657] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:11.306 [2024-12-12 16:08:37.605709] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:11.306 [2024-12-12 16:08:37.607805] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:11.306 [2024-12-12 16:08:37.607927] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:11.306 16:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.306 16:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:11.306 16:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:11.306 16:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:11.306 16:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:11.306 16:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:12:11.306 16:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:11.306 16:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:11.306 16:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:11.306 16:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:11.306 16:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:11.306 16:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.306 16:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:11.306 16:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.306 16:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.306 16:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.566 16:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:11.566 "name": "Existed_Raid", 00:12:11.566 "uuid": "29271de4-a8aa-4d5b-9201-dbcee11aa29c", 00:12:11.566 "strip_size_kb": 0, 00:12:11.566 "state": "configuring", 00:12:11.566 "raid_level": "raid1", 00:12:11.566 "superblock": true, 00:12:11.566 "num_base_bdevs": 4, 00:12:11.566 "num_base_bdevs_discovered": 3, 00:12:11.566 "num_base_bdevs_operational": 4, 00:12:11.566 "base_bdevs_list": [ 00:12:11.566 { 00:12:11.566 "name": "BaseBdev1", 00:12:11.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:11.566 "is_configured": false, 00:12:11.566 "data_offset": 0, 00:12:11.566 "data_size": 0 00:12:11.566 }, 00:12:11.566 { 00:12:11.566 "name": "BaseBdev2", 00:12:11.566 "uuid": "bbc929c3-69b2-4fa3-af66-817474ff65ef", 
00:12:11.566 "is_configured": true, 00:12:11.566 "data_offset": 2048, 00:12:11.566 "data_size": 63488 00:12:11.566 }, 00:12:11.566 { 00:12:11.566 "name": "BaseBdev3", 00:12:11.566 "uuid": "2771bde3-1f5b-4f04-8471-3c295e0dd85b", 00:12:11.566 "is_configured": true, 00:12:11.566 "data_offset": 2048, 00:12:11.566 "data_size": 63488 00:12:11.566 }, 00:12:11.566 { 00:12:11.566 "name": "BaseBdev4", 00:12:11.566 "uuid": "7ca29541-7fe3-4082-84e8-efe473c5e732", 00:12:11.566 "is_configured": true, 00:12:11.566 "data_offset": 2048, 00:12:11.566 "data_size": 63488 00:12:11.566 } 00:12:11.566 ] 00:12:11.566 }' 00:12:11.566 16:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:11.566 16:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.825 16:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:11.826 16:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.826 16:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.826 [2024-12-12 16:08:38.052888] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:11.826 16:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.826 16:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:11.826 16:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:11.826 16:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:11.826 16:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:11.826 16:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:12:11.826 16:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:11.826 16:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:11.826 16:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:11.826 16:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:11.826 16:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:11.826 16:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:11.826 16:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.826 16:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.826 16:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.826 16:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.826 16:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:11.826 "name": "Existed_Raid", 00:12:11.826 "uuid": "29271de4-a8aa-4d5b-9201-dbcee11aa29c", 00:12:11.826 "strip_size_kb": 0, 00:12:11.826 "state": "configuring", 00:12:11.826 "raid_level": "raid1", 00:12:11.826 "superblock": true, 00:12:11.826 "num_base_bdevs": 4, 00:12:11.826 "num_base_bdevs_discovered": 2, 00:12:11.826 "num_base_bdevs_operational": 4, 00:12:11.826 "base_bdevs_list": [ 00:12:11.826 { 00:12:11.826 "name": "BaseBdev1", 00:12:11.826 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:11.826 "is_configured": false, 00:12:11.826 "data_offset": 0, 00:12:11.826 "data_size": 0 00:12:11.826 }, 00:12:11.826 { 00:12:11.826 "name": null, 00:12:11.826 "uuid": "bbc929c3-69b2-4fa3-af66-817474ff65ef", 00:12:11.826 
"is_configured": false, 00:12:11.826 "data_offset": 0, 00:12:11.826 "data_size": 63488 00:12:11.826 }, 00:12:11.826 { 00:12:11.826 "name": "BaseBdev3", 00:12:11.826 "uuid": "2771bde3-1f5b-4f04-8471-3c295e0dd85b", 00:12:11.826 "is_configured": true, 00:12:11.826 "data_offset": 2048, 00:12:11.826 "data_size": 63488 00:12:11.826 }, 00:12:11.826 { 00:12:11.826 "name": "BaseBdev4", 00:12:11.826 "uuid": "7ca29541-7fe3-4082-84e8-efe473c5e732", 00:12:11.826 "is_configured": true, 00:12:11.826 "data_offset": 2048, 00:12:11.826 "data_size": 63488 00:12:11.826 } 00:12:11.826 ] 00:12:11.826 }' 00:12:11.826 16:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:11.826 16:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.395 16:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.396 16:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:12.396 16:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.396 16:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.396 16:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.396 16:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:12.396 16:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:12.396 16:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.396 16:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.396 [2024-12-12 16:08:38.538314] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:12.396 BaseBdev1 
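The `verify_raid_bdev_state` helper exercised throughout this trace (bdev_raid.sh@103-115) fetches the raid record with `bdev_raid_get_bdevs all`, selects it by name via `jq -r '.[] | select(.name == "Existed_Raid")'`, and compares `state`, `raid_level`, `strip_size_kb`, and the discovered/operational base-bdev counts against expectations. The equivalent selection and checks in Python, against a record shaped like the `raid_bdev_info` blobs above — a sketch of the check's logic, not the test's actual implementation:

```python
def verify_raid_bdev_state(raid_bdevs, name, expected_state,
                           raid_level, strip_size_kb, operational):
    """Mirror the shell helper: jq-style select by name, then compare fields."""
    info = next(b for b in raid_bdevs if b["name"] == name)
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size_kb
    assert info["num_base_bdevs_operational"] == operational
    # Discovered = base bdevs currently configured into the array.
    discovered = sum(1 for b in info["base_bdevs_list"] if b["is_configured"])
    assert info["num_base_bdevs_discovered"] == discovered
    return info

# Record shaped like the Existed_Raid JSON dumped in this trace:
# raid1 with superblock, 4 base bdevs, one slot (BaseBdev2) removed.
raid = {"name": "Existed_Raid", "state": "configuring", "raid_level": "raid1",
        "strip_size_kb": 0, "num_base_bdevs": 4,
        "num_base_bdevs_discovered": 3, "num_base_bdevs_operational": 4,
        "base_bdevs_list": [
            {"name": "BaseBdev1", "is_configured": True},
            {"name": None, "is_configured": False},
            {"name": "BaseBdev3", "is_configured": True},
            {"name": "BaseBdev4", "is_configured": True}]}
info = verify_raid_bdev_state([raid], "Existed_Raid", "configuring",
                              "raid1", 0, 4)
```

`strip_size_kb` is 0 here because raid1 mirrors rather than stripes, matching the `local strip_size=0` the xtrace shows for every raid1 verification in this run.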
00:12:12.396 16:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.396 16:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:12.396 16:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:12.396 16:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:12.396 16:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:12.396 16:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:12.396 16:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:12.396 16:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:12.396 16:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.396 16:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.396 16:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.396 16:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:12.396 16:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.396 16:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.396 [ 00:12:12.396 { 00:12:12.396 "name": "BaseBdev1", 00:12:12.396 "aliases": [ 00:12:12.396 "32dcb6eb-cda4-4f77-8b3a-8bf40f06eed6" 00:12:12.396 ], 00:12:12.396 "product_name": "Malloc disk", 00:12:12.396 "block_size": 512, 00:12:12.396 "num_blocks": 65536, 00:12:12.396 "uuid": "32dcb6eb-cda4-4f77-8b3a-8bf40f06eed6", 00:12:12.396 "assigned_rate_limits": { 00:12:12.396 
"rw_ios_per_sec": 0, 00:12:12.396 "rw_mbytes_per_sec": 0, 00:12:12.396 "r_mbytes_per_sec": 0, 00:12:12.396 "w_mbytes_per_sec": 0 00:12:12.396 }, 00:12:12.396 "claimed": true, 00:12:12.396 "claim_type": "exclusive_write", 00:12:12.396 "zoned": false, 00:12:12.396 "supported_io_types": { 00:12:12.396 "read": true, 00:12:12.396 "write": true, 00:12:12.396 "unmap": true, 00:12:12.396 "flush": true, 00:12:12.396 "reset": true, 00:12:12.396 "nvme_admin": false, 00:12:12.396 "nvme_io": false, 00:12:12.396 "nvme_io_md": false, 00:12:12.396 "write_zeroes": true, 00:12:12.396 "zcopy": true, 00:12:12.396 "get_zone_info": false, 00:12:12.396 "zone_management": false, 00:12:12.396 "zone_append": false, 00:12:12.396 "compare": false, 00:12:12.396 "compare_and_write": false, 00:12:12.396 "abort": true, 00:12:12.396 "seek_hole": false, 00:12:12.396 "seek_data": false, 00:12:12.396 "copy": true, 00:12:12.396 "nvme_iov_md": false 00:12:12.396 }, 00:12:12.396 "memory_domains": [ 00:12:12.396 { 00:12:12.396 "dma_device_id": "system", 00:12:12.396 "dma_device_type": 1 00:12:12.396 }, 00:12:12.396 { 00:12:12.396 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:12.396 "dma_device_type": 2 00:12:12.396 } 00:12:12.396 ], 00:12:12.396 "driver_specific": {} 00:12:12.396 } 00:12:12.396 ] 00:12:12.396 16:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.396 16:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:12.396 16:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:12.396 16:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:12.396 16:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:12.396 16:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:12:12.396 16:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:12.396 16:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:12.396 16:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:12.396 16:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:12.396 16:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:12.396 16:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:12.396 16:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.396 16:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:12.396 16:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.396 16:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.396 16:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.396 16:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:12.396 "name": "Existed_Raid", 00:12:12.396 "uuid": "29271de4-a8aa-4d5b-9201-dbcee11aa29c", 00:12:12.396 "strip_size_kb": 0, 00:12:12.396 "state": "configuring", 00:12:12.396 "raid_level": "raid1", 00:12:12.396 "superblock": true, 00:12:12.396 "num_base_bdevs": 4, 00:12:12.396 "num_base_bdevs_discovered": 3, 00:12:12.396 "num_base_bdevs_operational": 4, 00:12:12.396 "base_bdevs_list": [ 00:12:12.396 { 00:12:12.396 "name": "BaseBdev1", 00:12:12.396 "uuid": "32dcb6eb-cda4-4f77-8b3a-8bf40f06eed6", 00:12:12.396 "is_configured": true, 00:12:12.396 "data_offset": 2048, 00:12:12.396 "data_size": 63488 
00:12:12.396 }, 00:12:12.396 { 00:12:12.396 "name": null, 00:12:12.396 "uuid": "bbc929c3-69b2-4fa3-af66-817474ff65ef", 00:12:12.396 "is_configured": false, 00:12:12.396 "data_offset": 0, 00:12:12.396 "data_size": 63488 00:12:12.396 }, 00:12:12.396 { 00:12:12.396 "name": "BaseBdev3", 00:12:12.396 "uuid": "2771bde3-1f5b-4f04-8471-3c295e0dd85b", 00:12:12.396 "is_configured": true, 00:12:12.396 "data_offset": 2048, 00:12:12.396 "data_size": 63488 00:12:12.396 }, 00:12:12.396 { 00:12:12.396 "name": "BaseBdev4", 00:12:12.396 "uuid": "7ca29541-7fe3-4082-84e8-efe473c5e732", 00:12:12.396 "is_configured": true, 00:12:12.396 "data_offset": 2048, 00:12:12.396 "data_size": 63488 00:12:12.396 } 00:12:12.396 ] 00:12:12.396 }' 00:12:12.396 16:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:12.396 16:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.965 16:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:12.965 16:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.965 16:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.965 16:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.965 16:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.965 16:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:12.965 16:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:12.965 16:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.965 16:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.965 
[2024-12-12 16:08:39.069515] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:12.965 16:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.965 16:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:12.965 16:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:12.965 16:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:12.965 16:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:12.965 16:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:12.965 16:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:12.965 16:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:12.965 16:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:12.965 16:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:12.965 16:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:12.965 16:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:12.965 16:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.965 16:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.965 16:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.965 16:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.965 16:08:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:12.965 "name": "Existed_Raid", 00:12:12.965 "uuid": "29271de4-a8aa-4d5b-9201-dbcee11aa29c", 00:12:12.965 "strip_size_kb": 0, 00:12:12.965 "state": "configuring", 00:12:12.965 "raid_level": "raid1", 00:12:12.965 "superblock": true, 00:12:12.965 "num_base_bdevs": 4, 00:12:12.965 "num_base_bdevs_discovered": 2, 00:12:12.965 "num_base_bdevs_operational": 4, 00:12:12.965 "base_bdevs_list": [ 00:12:12.965 { 00:12:12.965 "name": "BaseBdev1", 00:12:12.965 "uuid": "32dcb6eb-cda4-4f77-8b3a-8bf40f06eed6", 00:12:12.965 "is_configured": true, 00:12:12.965 "data_offset": 2048, 00:12:12.965 "data_size": 63488 00:12:12.965 }, 00:12:12.965 { 00:12:12.965 "name": null, 00:12:12.965 "uuid": "bbc929c3-69b2-4fa3-af66-817474ff65ef", 00:12:12.965 "is_configured": false, 00:12:12.965 "data_offset": 0, 00:12:12.965 "data_size": 63488 00:12:12.965 }, 00:12:12.965 { 00:12:12.965 "name": null, 00:12:12.965 "uuid": "2771bde3-1f5b-4f04-8471-3c295e0dd85b", 00:12:12.965 "is_configured": false, 00:12:12.965 "data_offset": 0, 00:12:12.965 "data_size": 63488 00:12:12.965 }, 00:12:12.965 { 00:12:12.965 "name": "BaseBdev4", 00:12:12.965 "uuid": "7ca29541-7fe3-4082-84e8-efe473c5e732", 00:12:12.965 "is_configured": true, 00:12:12.965 "data_offset": 2048, 00:12:12.965 "data_size": 63488 00:12:12.965 } 00:12:12.965 ] 00:12:12.965 }' 00:12:12.966 16:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:12.966 16:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.225 16:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:13.225 16:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.225 16:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.225 
16:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.225 16:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.225 16:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:13.225 16:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:13.225 16:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.225 16:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.225 [2024-12-12 16:08:39.564631] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:13.225 16:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.225 16:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:13.225 16:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:13.225 16:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:13.225 16:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:13.225 16:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:13.225 16:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:13.225 16:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:13.225 16:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:13.225 16:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:12:13.225 16:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:13.484 16:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.484 16:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:13.484 16:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.484 16:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.484 16:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.484 16:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:13.484 "name": "Existed_Raid", 00:12:13.484 "uuid": "29271de4-a8aa-4d5b-9201-dbcee11aa29c", 00:12:13.484 "strip_size_kb": 0, 00:12:13.484 "state": "configuring", 00:12:13.484 "raid_level": "raid1", 00:12:13.484 "superblock": true, 00:12:13.484 "num_base_bdevs": 4, 00:12:13.484 "num_base_bdevs_discovered": 3, 00:12:13.484 "num_base_bdevs_operational": 4, 00:12:13.484 "base_bdevs_list": [ 00:12:13.484 { 00:12:13.484 "name": "BaseBdev1", 00:12:13.484 "uuid": "32dcb6eb-cda4-4f77-8b3a-8bf40f06eed6", 00:12:13.484 "is_configured": true, 00:12:13.484 "data_offset": 2048, 00:12:13.484 "data_size": 63488 00:12:13.484 }, 00:12:13.484 { 00:12:13.484 "name": null, 00:12:13.484 "uuid": "bbc929c3-69b2-4fa3-af66-817474ff65ef", 00:12:13.484 "is_configured": false, 00:12:13.484 "data_offset": 0, 00:12:13.484 "data_size": 63488 00:12:13.484 }, 00:12:13.484 { 00:12:13.484 "name": "BaseBdev3", 00:12:13.484 "uuid": "2771bde3-1f5b-4f04-8471-3c295e0dd85b", 00:12:13.484 "is_configured": true, 00:12:13.484 "data_offset": 2048, 00:12:13.484 "data_size": 63488 00:12:13.484 }, 00:12:13.484 { 00:12:13.484 "name": "BaseBdev4", 00:12:13.484 "uuid": 
"7ca29541-7fe3-4082-84e8-efe473c5e732", 00:12:13.484 "is_configured": true, 00:12:13.484 "data_offset": 2048, 00:12:13.484 "data_size": 63488 00:12:13.484 } 00:12:13.484 ] 00:12:13.484 }' 00:12:13.484 16:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:13.484 16:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.743 16:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.743 16:08:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.743 16:08:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.743 16:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:13.743 16:08:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.743 16:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:13.743 16:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:13.743 16:08:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.743 16:08:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.743 [2024-12-12 16:08:40.087853] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:14.002 16:08:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.002 16:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:14.002 16:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:14.002 16:08:40 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:14.002 16:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:14.002 16:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:14.002 16:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:14.002 16:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:14.002 16:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:14.002 16:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:14.002 16:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:14.002 16:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.002 16:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:14.002 16:08:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.002 16:08:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.002 16:08:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.002 16:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:14.002 "name": "Existed_Raid", 00:12:14.002 "uuid": "29271de4-a8aa-4d5b-9201-dbcee11aa29c", 00:12:14.002 "strip_size_kb": 0, 00:12:14.002 "state": "configuring", 00:12:14.002 "raid_level": "raid1", 00:12:14.002 "superblock": true, 00:12:14.002 "num_base_bdevs": 4, 00:12:14.002 "num_base_bdevs_discovered": 2, 00:12:14.002 "num_base_bdevs_operational": 4, 00:12:14.002 "base_bdevs_list": [ 00:12:14.002 { 00:12:14.002 "name": null, 00:12:14.002 
"uuid": "32dcb6eb-cda4-4f77-8b3a-8bf40f06eed6", 00:12:14.002 "is_configured": false, 00:12:14.002 "data_offset": 0, 00:12:14.002 "data_size": 63488 00:12:14.002 }, 00:12:14.002 { 00:12:14.002 "name": null, 00:12:14.002 "uuid": "bbc929c3-69b2-4fa3-af66-817474ff65ef", 00:12:14.002 "is_configured": false, 00:12:14.002 "data_offset": 0, 00:12:14.002 "data_size": 63488 00:12:14.002 }, 00:12:14.002 { 00:12:14.002 "name": "BaseBdev3", 00:12:14.002 "uuid": "2771bde3-1f5b-4f04-8471-3c295e0dd85b", 00:12:14.002 "is_configured": true, 00:12:14.002 "data_offset": 2048, 00:12:14.002 "data_size": 63488 00:12:14.002 }, 00:12:14.002 { 00:12:14.002 "name": "BaseBdev4", 00:12:14.002 "uuid": "7ca29541-7fe3-4082-84e8-efe473c5e732", 00:12:14.002 "is_configured": true, 00:12:14.002 "data_offset": 2048, 00:12:14.002 "data_size": 63488 00:12:14.002 } 00:12:14.002 ] 00:12:14.002 }' 00:12:14.002 16:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:14.002 16:08:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.570 16:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.570 16:08:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.570 16:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:14.570 16:08:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.570 16:08:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.570 16:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:14.570 16:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:14.570 16:08:40 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.570 16:08:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.570 [2024-12-12 16:08:40.749025] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:14.570 16:08:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.570 16:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:14.570 16:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:14.570 16:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:14.570 16:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:14.570 16:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:14.570 16:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:14.570 16:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:14.570 16:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:14.570 16:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:14.570 16:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:14.570 16:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.570 16:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:14.570 16:08:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.570 16:08:40 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.570 16:08:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.570 16:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:14.570 "name": "Existed_Raid", 00:12:14.570 "uuid": "29271de4-a8aa-4d5b-9201-dbcee11aa29c", 00:12:14.570 "strip_size_kb": 0, 00:12:14.570 "state": "configuring", 00:12:14.570 "raid_level": "raid1", 00:12:14.570 "superblock": true, 00:12:14.570 "num_base_bdevs": 4, 00:12:14.570 "num_base_bdevs_discovered": 3, 00:12:14.570 "num_base_bdevs_operational": 4, 00:12:14.570 "base_bdevs_list": [ 00:12:14.570 { 00:12:14.570 "name": null, 00:12:14.570 "uuid": "32dcb6eb-cda4-4f77-8b3a-8bf40f06eed6", 00:12:14.570 "is_configured": false, 00:12:14.570 "data_offset": 0, 00:12:14.570 "data_size": 63488 00:12:14.570 }, 00:12:14.570 { 00:12:14.570 "name": "BaseBdev2", 00:12:14.570 "uuid": "bbc929c3-69b2-4fa3-af66-817474ff65ef", 00:12:14.570 "is_configured": true, 00:12:14.570 "data_offset": 2048, 00:12:14.570 "data_size": 63488 00:12:14.570 }, 00:12:14.570 { 00:12:14.570 "name": "BaseBdev3", 00:12:14.570 "uuid": "2771bde3-1f5b-4f04-8471-3c295e0dd85b", 00:12:14.570 "is_configured": true, 00:12:14.570 "data_offset": 2048, 00:12:14.570 "data_size": 63488 00:12:14.570 }, 00:12:14.570 { 00:12:14.570 "name": "BaseBdev4", 00:12:14.570 "uuid": "7ca29541-7fe3-4082-84e8-efe473c5e732", 00:12:14.570 "is_configured": true, 00:12:14.570 "data_offset": 2048, 00:12:14.570 "data_size": 63488 00:12:14.570 } 00:12:14.570 ] 00:12:14.570 }' 00:12:14.570 16:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:14.570 16:08:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.138 16:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.138 16:08:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:15.138 16:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.138 16:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.138 16:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.138 16:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:15.138 16:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.138 16:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:15.138 16:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.138 16:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.138 16:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.138 16:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 32dcb6eb-cda4-4f77-8b3a-8bf40f06eed6 00:12:15.138 16:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.138 16:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.138 [2024-12-12 16:08:41.340472] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:15.138 [2024-12-12 16:08:41.340863] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:15.138 [2024-12-12 16:08:41.340948] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:15.138 [2024-12-12 16:08:41.341279] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:12:15.138 [2024-12-12 16:08:41.341512] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:15.138 NewBaseBdev 00:12:15.138 [2024-12-12 16:08:41.341560] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:15.138 [2024-12-12 16:08:41.341750] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:15.138 16:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.138 16:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:15.138 16:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:12:15.138 16:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:15.138 16:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:15.138 16:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:15.138 16:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:15.138 16:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:15.138 16:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.138 16:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.138 16:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.138 16:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:15.138 16:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.138 16:08:41 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.138 [ 00:12:15.138 { 00:12:15.138 "name": "NewBaseBdev", 00:12:15.138 "aliases": [ 00:12:15.138 "32dcb6eb-cda4-4f77-8b3a-8bf40f06eed6" 00:12:15.138 ], 00:12:15.138 "product_name": "Malloc disk", 00:12:15.138 "block_size": 512, 00:12:15.138 "num_blocks": 65536, 00:12:15.138 "uuid": "32dcb6eb-cda4-4f77-8b3a-8bf40f06eed6", 00:12:15.138 "assigned_rate_limits": { 00:12:15.138 "rw_ios_per_sec": 0, 00:12:15.138 "rw_mbytes_per_sec": 0, 00:12:15.138 "r_mbytes_per_sec": 0, 00:12:15.138 "w_mbytes_per_sec": 0 00:12:15.138 }, 00:12:15.138 "claimed": true, 00:12:15.138 "claim_type": "exclusive_write", 00:12:15.138 "zoned": false, 00:12:15.138 "supported_io_types": { 00:12:15.138 "read": true, 00:12:15.138 "write": true, 00:12:15.138 "unmap": true, 00:12:15.138 "flush": true, 00:12:15.138 "reset": true, 00:12:15.138 "nvme_admin": false, 00:12:15.138 "nvme_io": false, 00:12:15.138 "nvme_io_md": false, 00:12:15.138 "write_zeroes": true, 00:12:15.138 "zcopy": true, 00:12:15.138 "get_zone_info": false, 00:12:15.138 "zone_management": false, 00:12:15.138 "zone_append": false, 00:12:15.138 "compare": false, 00:12:15.138 "compare_and_write": false, 00:12:15.138 "abort": true, 00:12:15.138 "seek_hole": false, 00:12:15.138 "seek_data": false, 00:12:15.138 "copy": true, 00:12:15.138 "nvme_iov_md": false 00:12:15.138 }, 00:12:15.138 "memory_domains": [ 00:12:15.138 { 00:12:15.138 "dma_device_id": "system", 00:12:15.138 "dma_device_type": 1 00:12:15.138 }, 00:12:15.138 { 00:12:15.138 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:15.138 "dma_device_type": 2 00:12:15.138 } 00:12:15.138 ], 00:12:15.138 "driver_specific": {} 00:12:15.138 } 00:12:15.138 ] 00:12:15.138 16:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.138 16:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:15.138 16:08:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:12:15.138 16:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:15.138 16:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:15.138 16:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:15.138 16:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:15.138 16:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:15.138 16:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:15.138 16:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:15.138 16:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:15.138 16:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:15.138 16:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.138 16:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:15.138 16:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.138 16:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.138 16:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.138 16:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:15.138 "name": "Existed_Raid", 00:12:15.138 "uuid": "29271de4-a8aa-4d5b-9201-dbcee11aa29c", 00:12:15.138 "strip_size_kb": 0, 00:12:15.138 
"state": "online", 00:12:15.138 "raid_level": "raid1", 00:12:15.138 "superblock": true, 00:12:15.138 "num_base_bdevs": 4, 00:12:15.138 "num_base_bdevs_discovered": 4, 00:12:15.138 "num_base_bdevs_operational": 4, 00:12:15.138 "base_bdevs_list": [ 00:12:15.138 { 00:12:15.138 "name": "NewBaseBdev", 00:12:15.138 "uuid": "32dcb6eb-cda4-4f77-8b3a-8bf40f06eed6", 00:12:15.138 "is_configured": true, 00:12:15.138 "data_offset": 2048, 00:12:15.138 "data_size": 63488 00:12:15.138 }, 00:12:15.138 { 00:12:15.138 "name": "BaseBdev2", 00:12:15.138 "uuid": "bbc929c3-69b2-4fa3-af66-817474ff65ef", 00:12:15.138 "is_configured": true, 00:12:15.138 "data_offset": 2048, 00:12:15.138 "data_size": 63488 00:12:15.138 }, 00:12:15.138 { 00:12:15.138 "name": "BaseBdev3", 00:12:15.138 "uuid": "2771bde3-1f5b-4f04-8471-3c295e0dd85b", 00:12:15.138 "is_configured": true, 00:12:15.138 "data_offset": 2048, 00:12:15.138 "data_size": 63488 00:12:15.138 }, 00:12:15.138 { 00:12:15.138 "name": "BaseBdev4", 00:12:15.138 "uuid": "7ca29541-7fe3-4082-84e8-efe473c5e732", 00:12:15.138 "is_configured": true, 00:12:15.138 "data_offset": 2048, 00:12:15.138 "data_size": 63488 00:12:15.138 } 00:12:15.138 ] 00:12:15.138 }' 00:12:15.138 16:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:15.138 16:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.397 16:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:15.397 16:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:15.397 16:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:15.397 16:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:15.397 16:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:15.397 
16:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:15.397 16:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:15.397 16:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:15.397 16:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.397 16:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.397 [2024-12-12 16:08:41.744317] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:15.657 16:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.657 16:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:15.657 "name": "Existed_Raid", 00:12:15.657 "aliases": [ 00:12:15.657 "29271de4-a8aa-4d5b-9201-dbcee11aa29c" 00:12:15.657 ], 00:12:15.657 "product_name": "Raid Volume", 00:12:15.657 "block_size": 512, 00:12:15.657 "num_blocks": 63488, 00:12:15.657 "uuid": "29271de4-a8aa-4d5b-9201-dbcee11aa29c", 00:12:15.657 "assigned_rate_limits": { 00:12:15.657 "rw_ios_per_sec": 0, 00:12:15.657 "rw_mbytes_per_sec": 0, 00:12:15.657 "r_mbytes_per_sec": 0, 00:12:15.657 "w_mbytes_per_sec": 0 00:12:15.657 }, 00:12:15.657 "claimed": false, 00:12:15.657 "zoned": false, 00:12:15.657 "supported_io_types": { 00:12:15.657 "read": true, 00:12:15.657 "write": true, 00:12:15.657 "unmap": false, 00:12:15.657 "flush": false, 00:12:15.657 "reset": true, 00:12:15.657 "nvme_admin": false, 00:12:15.657 "nvme_io": false, 00:12:15.657 "nvme_io_md": false, 00:12:15.657 "write_zeroes": true, 00:12:15.657 "zcopy": false, 00:12:15.657 "get_zone_info": false, 00:12:15.657 "zone_management": false, 00:12:15.657 "zone_append": false, 00:12:15.657 "compare": false, 00:12:15.657 "compare_and_write": false, 00:12:15.657 
"abort": false, 00:12:15.657 "seek_hole": false, 00:12:15.657 "seek_data": false, 00:12:15.657 "copy": false, 00:12:15.657 "nvme_iov_md": false 00:12:15.657 }, 00:12:15.657 "memory_domains": [ 00:12:15.657 { 00:12:15.657 "dma_device_id": "system", 00:12:15.657 "dma_device_type": 1 00:12:15.657 }, 00:12:15.657 { 00:12:15.657 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:15.657 "dma_device_type": 2 00:12:15.657 }, 00:12:15.657 { 00:12:15.657 "dma_device_id": "system", 00:12:15.657 "dma_device_type": 1 00:12:15.657 }, 00:12:15.657 { 00:12:15.657 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:15.657 "dma_device_type": 2 00:12:15.657 }, 00:12:15.657 { 00:12:15.657 "dma_device_id": "system", 00:12:15.657 "dma_device_type": 1 00:12:15.657 }, 00:12:15.657 { 00:12:15.657 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:15.657 "dma_device_type": 2 00:12:15.657 }, 00:12:15.657 { 00:12:15.657 "dma_device_id": "system", 00:12:15.657 "dma_device_type": 1 00:12:15.657 }, 00:12:15.657 { 00:12:15.657 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:15.657 "dma_device_type": 2 00:12:15.657 } 00:12:15.657 ], 00:12:15.657 "driver_specific": { 00:12:15.657 "raid": { 00:12:15.657 "uuid": "29271de4-a8aa-4d5b-9201-dbcee11aa29c", 00:12:15.657 "strip_size_kb": 0, 00:12:15.657 "state": "online", 00:12:15.657 "raid_level": "raid1", 00:12:15.657 "superblock": true, 00:12:15.657 "num_base_bdevs": 4, 00:12:15.657 "num_base_bdevs_discovered": 4, 00:12:15.657 "num_base_bdevs_operational": 4, 00:12:15.657 "base_bdevs_list": [ 00:12:15.657 { 00:12:15.657 "name": "NewBaseBdev", 00:12:15.657 "uuid": "32dcb6eb-cda4-4f77-8b3a-8bf40f06eed6", 00:12:15.657 "is_configured": true, 00:12:15.657 "data_offset": 2048, 00:12:15.657 "data_size": 63488 00:12:15.657 }, 00:12:15.657 { 00:12:15.657 "name": "BaseBdev2", 00:12:15.657 "uuid": "bbc929c3-69b2-4fa3-af66-817474ff65ef", 00:12:15.657 "is_configured": true, 00:12:15.657 "data_offset": 2048, 00:12:15.657 "data_size": 63488 00:12:15.657 }, 00:12:15.657 { 
00:12:15.657 "name": "BaseBdev3", 00:12:15.657 "uuid": "2771bde3-1f5b-4f04-8471-3c295e0dd85b", 00:12:15.657 "is_configured": true, 00:12:15.657 "data_offset": 2048, 00:12:15.657 "data_size": 63488 00:12:15.657 }, 00:12:15.657 { 00:12:15.657 "name": "BaseBdev4", 00:12:15.657 "uuid": "7ca29541-7fe3-4082-84e8-efe473c5e732", 00:12:15.657 "is_configured": true, 00:12:15.657 "data_offset": 2048, 00:12:15.657 "data_size": 63488 00:12:15.657 } 00:12:15.657 ] 00:12:15.657 } 00:12:15.657 } 00:12:15.657 }' 00:12:15.657 16:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:15.657 16:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:15.657 BaseBdev2 00:12:15.657 BaseBdev3 00:12:15.657 BaseBdev4' 00:12:15.657 16:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:15.657 16:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:15.657 16:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:15.657 16:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:15.657 16:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.657 16:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.657 16:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:15.657 16:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.657 16:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:12:15.657 16:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:15.657 16:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:15.657 16:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:15.657 16:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.657 16:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.657 16:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:15.657 16:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.657 16:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:15.657 16:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:15.657 16:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:15.657 16:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:15.657 16:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:15.657 16:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.657 16:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.657 16:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.657 16:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:15.657 16:08:41 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:15.657 16:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:15.657 16:08:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:15.657 16:08:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.657 16:08:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.657 16:08:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:15.917 16:08:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.917 16:08:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:15.917 16:08:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:15.917 16:08:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:15.917 16:08:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.917 16:08:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.917 [2024-12-12 16:08:42.055683] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:15.917 [2024-12-12 16:08:42.055761] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:15.917 [2024-12-12 16:08:42.055864] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:15.917 [2024-12-12 16:08:42.056215] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:15.917 [2024-12-12 16:08:42.056286] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:12:15.917 16:08:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.917 16:08:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 75909 00:12:15.917 16:08:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 75909 ']' 00:12:15.917 16:08:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 75909 00:12:15.917 16:08:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:12:15.917 16:08:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:15.918 16:08:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75909 00:12:15.918 killing process with pid 75909 00:12:15.918 16:08:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:15.918 16:08:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:15.918 16:08:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75909' 00:12:15.918 16:08:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 75909 00:12:15.918 [2024-12-12 16:08:42.096527] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:15.918 16:08:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 75909 00:12:16.176 [2024-12-12 16:08:42.508698] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:17.552 16:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:12:17.552 00:12:17.552 real 0m11.797s 00:12:17.552 user 0m18.529s 00:12:17.552 sys 0m2.211s 00:12:17.552 16:08:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:12:17.552 ************************************ 00:12:17.552 END TEST raid_state_function_test_sb 00:12:17.552 ************************************ 00:12:17.552 16:08:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.552 16:08:43 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:12:17.552 16:08:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:17.552 16:08:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:17.552 16:08:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:17.552 ************************************ 00:12:17.552 START TEST raid_superblock_test 00:12:17.552 ************************************ 00:12:17.552 16:08:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 4 00:12:17.552 16:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:12:17.552 16:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:12:17.552 16:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:12:17.552 16:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:12:17.552 16:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:12:17.552 16:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:12:17.552 16:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:12:17.552 16:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:12:17.552 16:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:12:17.552 16:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:12:17.552 16:08:43 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:12:17.552 16:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:12:17.552 16:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:12:17.552 16:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:12:17.552 16:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:12:17.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:17.552 16:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=76581 00:12:17.552 16:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:12:17.552 16:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 76581 00:12:17.552 16:08:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 76581 ']' 00:12:17.552 16:08:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:17.552 16:08:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:17.552 16:08:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:17.552 16:08:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:17.552 16:08:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.552 [2024-12-12 16:08:43.869098] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:12:17.552 [2024-12-12 16:08:43.869360] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76581 ] 00:12:17.811 [2024-12-12 16:08:44.051226] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:18.071 [2024-12-12 16:08:44.186259] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:18.071 [2024-12-12 16:08:44.418325] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:18.071 [2024-12-12 16:08:44.418490] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:18.642 16:08:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:18.642 16:08:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:12:18.642 16:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:12:18.642 16:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:18.642 16:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:12:18.642 16:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:12:18.642 16:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:18.642 16:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:18.642 16:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:18.642 16:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:18.643 16:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:12:18.643 
16:08:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.643 16:08:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.643 malloc1 00:12:18.643 16:08:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.643 16:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:18.643 16:08:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.643 16:08:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.643 [2024-12-12 16:08:44.773331] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:18.643 [2024-12-12 16:08:44.773499] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:18.643 [2024-12-12 16:08:44.773546] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:18.643 [2024-12-12 16:08:44.773586] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:18.643 [2024-12-12 16:08:44.776114] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:18.643 [2024-12-12 16:08:44.776200] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:18.643 pt1 00:12:18.643 16:08:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.643 16:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:18.643 16:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:18.643 16:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:12:18.643 16:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:12:18.643 16:08:44 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:18.643 16:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:18.643 16:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:18.643 16:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:18.643 16:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:12:18.643 16:08:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.643 16:08:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.643 malloc2 00:12:18.643 16:08:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.643 16:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:18.643 16:08:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.643 16:08:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.643 [2024-12-12 16:08:44.834566] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:18.643 [2024-12-12 16:08:44.834713] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:18.643 [2024-12-12 16:08:44.834760] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:18.643 [2024-12-12 16:08:44.834797] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:18.643 [2024-12-12 16:08:44.837284] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:18.643 [2024-12-12 16:08:44.837384] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:18.643 
pt2 00:12:18.643 16:08:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.643 16:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:18.643 16:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:18.643 16:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:12:18.643 16:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:12:18.643 16:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:12:18.643 16:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:18.643 16:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:18.643 16:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:18.643 16:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:12:18.643 16:08:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.643 16:08:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.643 malloc3 00:12:18.643 16:08:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.643 16:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:18.643 16:08:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.643 16:08:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.643 [2024-12-12 16:08:44.910291] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:18.643 [2024-12-12 16:08:44.910356] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:18.643 [2024-12-12 16:08:44.910381] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:18.643 [2024-12-12 16:08:44.910392] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:18.643 [2024-12-12 16:08:44.912710] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:18.643 [2024-12-12 16:08:44.912753] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:18.643 pt3 00:12:18.643 16:08:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.643 16:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:18.643 16:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:18.643 16:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:12:18.643 16:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:12:18.643 16:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:12:18.643 16:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:18.643 16:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:18.643 16:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:18.643 16:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:12:18.643 16:08:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.643 16:08:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.643 malloc4 00:12:18.643 16:08:44 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.643 16:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:18.643 16:08:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.643 16:08:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.643 [2024-12-12 16:08:44.971372] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:18.643 [2024-12-12 16:08:44.971519] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:18.643 [2024-12-12 16:08:44.971565] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:18.643 [2024-12-12 16:08:44.971656] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:18.643 [2024-12-12 16:08:44.973943] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:18.643 [2024-12-12 16:08:44.974022] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:18.643 pt4 00:12:18.643 16:08:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.643 16:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:18.643 16:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:18.643 16:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:12:18.643 16:08:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.643 16:08:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.643 [2024-12-12 16:08:44.983369] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:18.643 [2024-12-12 16:08:44.985400] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:18.643 [2024-12-12 16:08:44.985514] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:18.643 [2024-12-12 16:08:44.985603] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:18.643 [2024-12-12 16:08:44.985853] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:18.643 [2024-12-12 16:08:44.985922] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:18.643 [2024-12-12 16:08:44.986198] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:18.643 [2024-12-12 16:08:44.986397] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:18.643 [2024-12-12 16:08:44.986415] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:18.643 [2024-12-12 16:08:44.986562] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:18.643 16:08:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.643 16:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:18.643 16:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:18.643 16:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:18.643 16:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:18.643 16:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:18.643 16:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:18.643 16:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:18.643 
16:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:18.643 16:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:18.643 16:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:18.904 16:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.904 16:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:18.904 16:08:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.904 16:08:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.904 16:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.904 16:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:18.904 "name": "raid_bdev1", 00:12:18.904 "uuid": "5eff3211-bca3-41fa-a145-d2955d64950a", 00:12:18.904 "strip_size_kb": 0, 00:12:18.904 "state": "online", 00:12:18.904 "raid_level": "raid1", 00:12:18.904 "superblock": true, 00:12:18.904 "num_base_bdevs": 4, 00:12:18.904 "num_base_bdevs_discovered": 4, 00:12:18.904 "num_base_bdevs_operational": 4, 00:12:18.904 "base_bdevs_list": [ 00:12:18.904 { 00:12:18.904 "name": "pt1", 00:12:18.904 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:18.904 "is_configured": true, 00:12:18.904 "data_offset": 2048, 00:12:18.904 "data_size": 63488 00:12:18.904 }, 00:12:18.904 { 00:12:18.904 "name": "pt2", 00:12:18.904 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:18.904 "is_configured": true, 00:12:18.904 "data_offset": 2048, 00:12:18.904 "data_size": 63488 00:12:18.904 }, 00:12:18.904 { 00:12:18.904 "name": "pt3", 00:12:18.904 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:18.904 "is_configured": true, 00:12:18.904 "data_offset": 2048, 00:12:18.904 "data_size": 63488 
00:12:18.904 }, 00:12:18.904 { 00:12:18.904 "name": "pt4", 00:12:18.904 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:18.904 "is_configured": true, 00:12:18.904 "data_offset": 2048, 00:12:18.904 "data_size": 63488 00:12:18.904 } 00:12:18.904 ] 00:12:18.904 }' 00:12:18.904 16:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:18.904 16:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.161 16:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:12:19.161 16:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:19.161 16:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:19.161 16:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:19.161 16:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:19.161 16:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:19.161 16:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:19.161 16:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:19.161 16:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.161 16:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.161 [2024-12-12 16:08:45.471031] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:19.161 16:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.420 16:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:19.420 "name": "raid_bdev1", 00:12:19.420 "aliases": [ 00:12:19.420 "5eff3211-bca3-41fa-a145-d2955d64950a" 00:12:19.420 ], 
00:12:19.420 "product_name": "Raid Volume", 00:12:19.420 "block_size": 512, 00:12:19.420 "num_blocks": 63488, 00:12:19.420 "uuid": "5eff3211-bca3-41fa-a145-d2955d64950a", 00:12:19.420 "assigned_rate_limits": { 00:12:19.420 "rw_ios_per_sec": 0, 00:12:19.420 "rw_mbytes_per_sec": 0, 00:12:19.420 "r_mbytes_per_sec": 0, 00:12:19.420 "w_mbytes_per_sec": 0 00:12:19.420 }, 00:12:19.420 "claimed": false, 00:12:19.420 "zoned": false, 00:12:19.420 "supported_io_types": { 00:12:19.420 "read": true, 00:12:19.420 "write": true, 00:12:19.420 "unmap": false, 00:12:19.420 "flush": false, 00:12:19.420 "reset": true, 00:12:19.420 "nvme_admin": false, 00:12:19.420 "nvme_io": false, 00:12:19.420 "nvme_io_md": false, 00:12:19.420 "write_zeroes": true, 00:12:19.420 "zcopy": false, 00:12:19.420 "get_zone_info": false, 00:12:19.420 "zone_management": false, 00:12:19.420 "zone_append": false, 00:12:19.420 "compare": false, 00:12:19.420 "compare_and_write": false, 00:12:19.420 "abort": false, 00:12:19.420 "seek_hole": false, 00:12:19.420 "seek_data": false, 00:12:19.420 "copy": false, 00:12:19.420 "nvme_iov_md": false 00:12:19.420 }, 00:12:19.420 "memory_domains": [ 00:12:19.420 { 00:12:19.420 "dma_device_id": "system", 00:12:19.420 "dma_device_type": 1 00:12:19.420 }, 00:12:19.420 { 00:12:19.420 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:19.420 "dma_device_type": 2 00:12:19.420 }, 00:12:19.420 { 00:12:19.420 "dma_device_id": "system", 00:12:19.420 "dma_device_type": 1 00:12:19.420 }, 00:12:19.420 { 00:12:19.420 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:19.420 "dma_device_type": 2 00:12:19.420 }, 00:12:19.420 { 00:12:19.420 "dma_device_id": "system", 00:12:19.420 "dma_device_type": 1 00:12:19.420 }, 00:12:19.420 { 00:12:19.420 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:19.420 "dma_device_type": 2 00:12:19.420 }, 00:12:19.420 { 00:12:19.420 "dma_device_id": "system", 00:12:19.420 "dma_device_type": 1 00:12:19.420 }, 00:12:19.420 { 00:12:19.420 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:12:19.420 "dma_device_type": 2 00:12:19.420 } 00:12:19.420 ], 00:12:19.420 "driver_specific": { 00:12:19.420 "raid": { 00:12:19.420 "uuid": "5eff3211-bca3-41fa-a145-d2955d64950a", 00:12:19.420 "strip_size_kb": 0, 00:12:19.420 "state": "online", 00:12:19.420 "raid_level": "raid1", 00:12:19.420 "superblock": true, 00:12:19.420 "num_base_bdevs": 4, 00:12:19.420 "num_base_bdevs_discovered": 4, 00:12:19.420 "num_base_bdevs_operational": 4, 00:12:19.420 "base_bdevs_list": [ 00:12:19.420 { 00:12:19.420 "name": "pt1", 00:12:19.420 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:19.420 "is_configured": true, 00:12:19.420 "data_offset": 2048, 00:12:19.420 "data_size": 63488 00:12:19.420 }, 00:12:19.420 { 00:12:19.420 "name": "pt2", 00:12:19.420 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:19.420 "is_configured": true, 00:12:19.420 "data_offset": 2048, 00:12:19.420 "data_size": 63488 00:12:19.420 }, 00:12:19.420 { 00:12:19.420 "name": "pt3", 00:12:19.420 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:19.420 "is_configured": true, 00:12:19.420 "data_offset": 2048, 00:12:19.420 "data_size": 63488 00:12:19.420 }, 00:12:19.420 { 00:12:19.420 "name": "pt4", 00:12:19.420 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:19.420 "is_configured": true, 00:12:19.420 "data_offset": 2048, 00:12:19.420 "data_size": 63488 00:12:19.420 } 00:12:19.420 ] 00:12:19.420 } 00:12:19.420 } 00:12:19.420 }' 00:12:19.420 16:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:19.420 16:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:19.420 pt2 00:12:19.420 pt3 00:12:19.420 pt4' 00:12:19.420 16:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:19.420 16:08:45 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:19.420 16:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:19.420 16:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:19.420 16:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.420 16:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:19.420 16:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.420 16:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.420 16:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:19.420 16:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:19.420 16:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:19.420 16:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:19.420 16:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:19.420 16:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.420 16:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.420 16:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.421 16:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:19.421 16:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:19.421 16:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:19.421 16:08:45 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:19.421 16:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:19.421 16:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.421 16:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.421 16:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.421 16:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:19.421 16:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:19.421 16:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:19.421 16:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:12:19.421 16:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:19.421 16:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.421 16:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.679 16:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.679 16:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:19.679 16:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:19.679 16:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:19.679 16:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:12:19.679 16:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:19.679 16:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.679 [2024-12-12 16:08:45.818286] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:19.679 16:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.679 16:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=5eff3211-bca3-41fa-a145-d2955d64950a 00:12:19.679 16:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 5eff3211-bca3-41fa-a145-d2955d64950a ']' 00:12:19.679 16:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:19.679 16:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.679 16:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.679 [2024-12-12 16:08:45.865946] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:19.679 [2024-12-12 16:08:45.865986] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:19.679 [2024-12-12 16:08:45.866091] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:19.679 [2024-12-12 16:08:45.866190] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:19.679 [2024-12-12 16:08:45.866209] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:19.679 16:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.679 16:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.679 16:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.679 16:08:45 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:12:19.679 16:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.679 16:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.679 16:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:12:19.679 16:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:12:19.679 16:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:19.679 16:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:12:19.679 16:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.679 16:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.679 16:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.679 16:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:19.680 16:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:12:19.680 16:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.680 16:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.680 16:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.680 16:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:19.680 16:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:12:19.680 16:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.680 16:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.680 16:08:45 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.680 16:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:19.680 16:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:12:19.680 16:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.680 16:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.680 16:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.680 16:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:12:19.680 16:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.680 16:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.680 16:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:12:19.680 16:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.680 16:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:12:19.680 16:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:19.680 16:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:12:19.680 16:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:19.680 16:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:12:19.680 16:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:19.680 16:08:46 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:12:19.680 16:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:19.680 16:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:19.680 16:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.680 16:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.939 [2024-12-12 16:08:46.033748] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:19.939 [2024-12-12 16:08:46.036335] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:19.939 [2024-12-12 16:08:46.036411] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:12:19.939 [2024-12-12 16:08:46.036461] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:12:19.939 [2024-12-12 16:08:46.036537] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:12:19.939 [2024-12-12 16:08:46.036617] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:12:19.939 [2024-12-12 16:08:46.036643] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:12:19.939 [2024-12-12 16:08:46.036670] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:12:19.939 [2024-12-12 16:08:46.036689] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:19.939 [2024-12-12 16:08:46.036705] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name 
raid_bdev1, state configuring 00:12:19.939 request: 00:12:19.939 { 00:12:19.939 "name": "raid_bdev1", 00:12:19.939 "raid_level": "raid1", 00:12:19.939 "base_bdevs": [ 00:12:19.939 "malloc1", 00:12:19.939 "malloc2", 00:12:19.939 "malloc3", 00:12:19.939 "malloc4" 00:12:19.939 ], 00:12:19.939 "superblock": false, 00:12:19.939 "method": "bdev_raid_create", 00:12:19.939 "req_id": 1 00:12:19.939 } 00:12:19.939 Got JSON-RPC error response 00:12:19.939 response: 00:12:19.939 { 00:12:19.939 "code": -17, 00:12:19.939 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:12:19.939 } 00:12:19.939 16:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:12:19.939 16:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:12:19.939 16:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:19.939 16:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:19.939 16:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:19.939 16:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.939 16:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.939 16:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.939 16:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:12:19.939 16:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.939 16:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:12:19.939 16:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:12:19.939 16:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:19.939 
16:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.939 16:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.939 [2024-12-12 16:08:46.101583] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:19.939 [2024-12-12 16:08:46.101770] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:19.939 [2024-12-12 16:08:46.101815] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:19.939 [2024-12-12 16:08:46.101855] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:19.939 [2024-12-12 16:08:46.104397] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:19.939 [2024-12-12 16:08:46.104494] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:19.939 [2024-12-12 16:08:46.104641] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:19.939 [2024-12-12 16:08:46.104738] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:19.939 pt1 00:12:19.939 16:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.939 16:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:12:19.939 16:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:19.939 16:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:19.939 16:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:19.939 16:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:19.939 16:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:19.939 16:08:46 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:19.940 16:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:19.940 16:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:19.940 16:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:19.940 16:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:19.940 16:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.940 16:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.940 16:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.940 16:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.940 16:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:19.940 "name": "raid_bdev1", 00:12:19.940 "uuid": "5eff3211-bca3-41fa-a145-d2955d64950a", 00:12:19.940 "strip_size_kb": 0, 00:12:19.940 "state": "configuring", 00:12:19.940 "raid_level": "raid1", 00:12:19.940 "superblock": true, 00:12:19.940 "num_base_bdevs": 4, 00:12:19.940 "num_base_bdevs_discovered": 1, 00:12:19.940 "num_base_bdevs_operational": 4, 00:12:19.940 "base_bdevs_list": [ 00:12:19.940 { 00:12:19.940 "name": "pt1", 00:12:19.940 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:19.940 "is_configured": true, 00:12:19.940 "data_offset": 2048, 00:12:19.940 "data_size": 63488 00:12:19.940 }, 00:12:19.940 { 00:12:19.940 "name": null, 00:12:19.940 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:19.940 "is_configured": false, 00:12:19.940 "data_offset": 2048, 00:12:19.940 "data_size": 63488 00:12:19.940 }, 00:12:19.940 { 00:12:19.940 "name": null, 00:12:19.940 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:19.940 
"is_configured": false, 00:12:19.940 "data_offset": 2048, 00:12:19.940 "data_size": 63488 00:12:19.940 }, 00:12:19.940 { 00:12:19.940 "name": null, 00:12:19.940 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:19.940 "is_configured": false, 00:12:19.940 "data_offset": 2048, 00:12:19.940 "data_size": 63488 00:12:19.940 } 00:12:19.940 ] 00:12:19.940 }' 00:12:19.940 16:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:19.940 16:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.507 16:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:12:20.507 16:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:20.507 16:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.507 16:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.507 [2024-12-12 16:08:46.604712] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:20.507 [2024-12-12 16:08:46.604877] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:20.507 [2024-12-12 16:08:46.604923] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:12:20.507 [2024-12-12 16:08:46.604939] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:20.507 [2024-12-12 16:08:46.605468] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:20.507 [2024-12-12 16:08:46.605493] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:20.507 [2024-12-12 16:08:46.605601] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:20.507 [2024-12-12 16:08:46.605633] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:12:20.507 pt2 00:12:20.507 16:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.507 16:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:12:20.507 16:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.507 16:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.507 [2024-12-12 16:08:46.616688] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:12:20.507 16:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.507 16:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:12:20.507 16:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:20.507 16:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:20.507 16:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:20.507 16:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:20.507 16:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:20.507 16:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:20.507 16:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:20.507 16:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:20.507 16:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:20.507 16:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.507 16:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:12:20.507 16:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.507 16:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.507 16:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.507 16:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:20.507 "name": "raid_bdev1", 00:12:20.507 "uuid": "5eff3211-bca3-41fa-a145-d2955d64950a", 00:12:20.507 "strip_size_kb": 0, 00:12:20.507 "state": "configuring", 00:12:20.507 "raid_level": "raid1", 00:12:20.507 "superblock": true, 00:12:20.507 "num_base_bdevs": 4, 00:12:20.507 "num_base_bdevs_discovered": 1, 00:12:20.507 "num_base_bdevs_operational": 4, 00:12:20.507 "base_bdevs_list": [ 00:12:20.507 { 00:12:20.507 "name": "pt1", 00:12:20.507 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:20.507 "is_configured": true, 00:12:20.507 "data_offset": 2048, 00:12:20.507 "data_size": 63488 00:12:20.507 }, 00:12:20.507 { 00:12:20.507 "name": null, 00:12:20.507 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:20.507 "is_configured": false, 00:12:20.507 "data_offset": 0, 00:12:20.507 "data_size": 63488 00:12:20.507 }, 00:12:20.507 { 00:12:20.507 "name": null, 00:12:20.507 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:20.507 "is_configured": false, 00:12:20.507 "data_offset": 2048, 00:12:20.507 "data_size": 63488 00:12:20.507 }, 00:12:20.507 { 00:12:20.507 "name": null, 00:12:20.507 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:20.507 "is_configured": false, 00:12:20.507 "data_offset": 2048, 00:12:20.507 "data_size": 63488 00:12:20.507 } 00:12:20.507 ] 00:12:20.507 }' 00:12:20.507 16:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:20.507 16:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.766 16:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:12:20.766 16:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:20.766 16:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:20.766 16:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.766 16:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.766 [2024-12-12 16:08:47.087926] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:20.766 [2024-12-12 16:08:47.088129] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:20.766 [2024-12-12 16:08:47.088162] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:12:20.766 [2024-12-12 16:08:47.088174] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:20.766 [2024-12-12 16:08:47.088734] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:20.766 [2024-12-12 16:08:47.088769] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:20.766 [2024-12-12 16:08:47.088880] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:20.766 [2024-12-12 16:08:47.088926] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:20.766 pt2 00:12:20.766 16:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.766 16:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:20.766 16:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:20.766 16:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:20.766 16:08:47 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.766 16:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.766 [2024-12-12 16:08:47.099835] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:20.766 [2024-12-12 16:08:47.099926] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:20.766 [2024-12-12 16:08:47.099954] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:12:20.766 [2024-12-12 16:08:47.099966] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:20.766 [2024-12-12 16:08:47.100468] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:20.766 [2024-12-12 16:08:47.100506] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:20.766 [2024-12-12 16:08:47.100605] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:20.766 [2024-12-12 16:08:47.100631] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:20.766 pt3 00:12:20.766 16:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.766 16:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:20.766 16:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:20.766 16:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:20.766 16:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.766 16:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.766 [2024-12-12 16:08:47.111810] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:20.766 [2024-12-12 
16:08:47.111868] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:20.766 [2024-12-12 16:08:47.111907] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:12:20.766 [2024-12-12 16:08:47.111919] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:20.766 [2024-12-12 16:08:47.112439] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:20.766 [2024-12-12 16:08:47.112470] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:20.766 [2024-12-12 16:08:47.112565] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:20.766 [2024-12-12 16:08:47.112602] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:20.766 [2024-12-12 16:08:47.112799] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:20.766 [2024-12-12 16:08:47.112811] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:20.766 [2024-12-12 16:08:47.113153] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:20.766 [2024-12-12 16:08:47.113368] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:20.766 [2024-12-12 16:08:47.113385] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:12:20.766 [2024-12-12 16:08:47.113564] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:20.766 pt4 00:12:21.026 16:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.026 16:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:21.026 16:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:21.026 16:08:47 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:21.026 16:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:21.026 16:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:21.026 16:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:21.026 16:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:21.026 16:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:21.026 16:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:21.026 16:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:21.026 16:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:21.026 16:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:21.026 16:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.026 16:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:21.026 16:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.026 16:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.026 16:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.026 16:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:21.026 "name": "raid_bdev1", 00:12:21.026 "uuid": "5eff3211-bca3-41fa-a145-d2955d64950a", 00:12:21.026 "strip_size_kb": 0, 00:12:21.026 "state": "online", 00:12:21.026 "raid_level": "raid1", 00:12:21.026 "superblock": true, 00:12:21.026 "num_base_bdevs": 4, 00:12:21.026 
"num_base_bdevs_discovered": 4, 00:12:21.026 "num_base_bdevs_operational": 4, 00:12:21.026 "base_bdevs_list": [ 00:12:21.026 { 00:12:21.026 "name": "pt1", 00:12:21.026 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:21.026 "is_configured": true, 00:12:21.026 "data_offset": 2048, 00:12:21.026 "data_size": 63488 00:12:21.026 }, 00:12:21.026 { 00:12:21.026 "name": "pt2", 00:12:21.026 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:21.026 "is_configured": true, 00:12:21.026 "data_offset": 2048, 00:12:21.026 "data_size": 63488 00:12:21.026 }, 00:12:21.026 { 00:12:21.026 "name": "pt3", 00:12:21.026 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:21.026 "is_configured": true, 00:12:21.026 "data_offset": 2048, 00:12:21.026 "data_size": 63488 00:12:21.026 }, 00:12:21.026 { 00:12:21.026 "name": "pt4", 00:12:21.026 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:21.026 "is_configured": true, 00:12:21.026 "data_offset": 2048, 00:12:21.026 "data_size": 63488 00:12:21.026 } 00:12:21.026 ] 00:12:21.026 }' 00:12:21.026 16:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:21.026 16:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.286 16:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:12:21.286 16:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:21.286 16:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:21.286 16:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:21.286 16:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:21.286 16:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:21.286 16:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:12:21.286 16:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:21.286 16:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.286 16:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.286 [2024-12-12 16:08:47.587510] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:21.286 16:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.286 16:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:21.286 "name": "raid_bdev1", 00:12:21.286 "aliases": [ 00:12:21.286 "5eff3211-bca3-41fa-a145-d2955d64950a" 00:12:21.286 ], 00:12:21.286 "product_name": "Raid Volume", 00:12:21.286 "block_size": 512, 00:12:21.286 "num_blocks": 63488, 00:12:21.286 "uuid": "5eff3211-bca3-41fa-a145-d2955d64950a", 00:12:21.286 "assigned_rate_limits": { 00:12:21.286 "rw_ios_per_sec": 0, 00:12:21.286 "rw_mbytes_per_sec": 0, 00:12:21.286 "r_mbytes_per_sec": 0, 00:12:21.286 "w_mbytes_per_sec": 0 00:12:21.286 }, 00:12:21.286 "claimed": false, 00:12:21.286 "zoned": false, 00:12:21.286 "supported_io_types": { 00:12:21.286 "read": true, 00:12:21.286 "write": true, 00:12:21.286 "unmap": false, 00:12:21.286 "flush": false, 00:12:21.286 "reset": true, 00:12:21.286 "nvme_admin": false, 00:12:21.286 "nvme_io": false, 00:12:21.286 "nvme_io_md": false, 00:12:21.286 "write_zeroes": true, 00:12:21.286 "zcopy": false, 00:12:21.286 "get_zone_info": false, 00:12:21.286 "zone_management": false, 00:12:21.286 "zone_append": false, 00:12:21.286 "compare": false, 00:12:21.286 "compare_and_write": false, 00:12:21.286 "abort": false, 00:12:21.286 "seek_hole": false, 00:12:21.286 "seek_data": false, 00:12:21.286 "copy": false, 00:12:21.286 "nvme_iov_md": false 00:12:21.286 }, 00:12:21.286 "memory_domains": [ 00:12:21.286 { 00:12:21.286 "dma_device_id": "system", 00:12:21.286 
"dma_device_type": 1 00:12:21.286 }, 00:12:21.286 { 00:12:21.286 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:21.286 "dma_device_type": 2 00:12:21.286 }, 00:12:21.286 { 00:12:21.286 "dma_device_id": "system", 00:12:21.286 "dma_device_type": 1 00:12:21.286 }, 00:12:21.286 { 00:12:21.286 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:21.286 "dma_device_type": 2 00:12:21.286 }, 00:12:21.286 { 00:12:21.286 "dma_device_id": "system", 00:12:21.286 "dma_device_type": 1 00:12:21.286 }, 00:12:21.286 { 00:12:21.286 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:21.286 "dma_device_type": 2 00:12:21.286 }, 00:12:21.286 { 00:12:21.286 "dma_device_id": "system", 00:12:21.286 "dma_device_type": 1 00:12:21.286 }, 00:12:21.286 { 00:12:21.286 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:21.286 "dma_device_type": 2 00:12:21.286 } 00:12:21.286 ], 00:12:21.286 "driver_specific": { 00:12:21.286 "raid": { 00:12:21.286 "uuid": "5eff3211-bca3-41fa-a145-d2955d64950a", 00:12:21.286 "strip_size_kb": 0, 00:12:21.286 "state": "online", 00:12:21.286 "raid_level": "raid1", 00:12:21.286 "superblock": true, 00:12:21.286 "num_base_bdevs": 4, 00:12:21.286 "num_base_bdevs_discovered": 4, 00:12:21.286 "num_base_bdevs_operational": 4, 00:12:21.286 "base_bdevs_list": [ 00:12:21.286 { 00:12:21.286 "name": "pt1", 00:12:21.286 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:21.286 "is_configured": true, 00:12:21.286 "data_offset": 2048, 00:12:21.286 "data_size": 63488 00:12:21.286 }, 00:12:21.286 { 00:12:21.286 "name": "pt2", 00:12:21.286 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:21.286 "is_configured": true, 00:12:21.286 "data_offset": 2048, 00:12:21.286 "data_size": 63488 00:12:21.286 }, 00:12:21.286 { 00:12:21.286 "name": "pt3", 00:12:21.286 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:21.286 "is_configured": true, 00:12:21.286 "data_offset": 2048, 00:12:21.286 "data_size": 63488 00:12:21.286 }, 00:12:21.286 { 00:12:21.286 "name": "pt4", 00:12:21.286 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:12:21.286 "is_configured": true, 00:12:21.286 "data_offset": 2048, 00:12:21.286 "data_size": 63488 00:12:21.286 } 00:12:21.286 ] 00:12:21.286 } 00:12:21.286 } 00:12:21.286 }' 00:12:21.286 16:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:21.546 16:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:21.546 pt2 00:12:21.546 pt3 00:12:21.546 pt4' 00:12:21.546 16:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:21.546 16:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:21.546 16:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:21.546 16:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:21.546 16:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.546 16:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.546 16:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:21.546 16:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.546 16:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:21.546 16:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:21.546 16:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:21.546 16:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:21.546 16:08:47 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:21.546 16:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.546 16:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.546 16:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.546 16:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:21.546 16:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:21.546 16:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:21.546 16:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:21.546 16:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:21.546 16:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.546 16:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.546 16:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.546 16:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:21.546 16:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:21.546 16:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:21.546 16:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:12:21.546 16:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:21.546 16:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:12:21.546 16:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.546 16:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.546 16:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:21.546 16:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:21.546 16:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:12:21.546 16:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:21.546 16:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.546 16:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.546 [2024-12-12 16:08:47.875099] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:21.806 16:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.806 16:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 5eff3211-bca3-41fa-a145-d2955d64950a '!=' 5eff3211-bca3-41fa-a145-d2955d64950a ']' 00:12:21.806 16:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:12:21.806 16:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:21.806 16:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:21.806 16:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:12:21.806 16:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.806 16:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.806 [2024-12-12 16:08:47.918793] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:12:21.806 16:08:47 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.806 16:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:21.806 16:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:21.806 16:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:21.806 16:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:21.806 16:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:21.806 16:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:21.806 16:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:21.806 16:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:21.806 16:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:21.806 16:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:21.806 16:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:21.806 16:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.806 16:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.806 16:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.806 16:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.806 16:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:21.806 "name": "raid_bdev1", 00:12:21.806 "uuid": "5eff3211-bca3-41fa-a145-d2955d64950a", 00:12:21.806 "strip_size_kb": 0, 00:12:21.806 "state": "online", 
00:12:21.806 "raid_level": "raid1", 00:12:21.806 "superblock": true, 00:12:21.806 "num_base_bdevs": 4, 00:12:21.806 "num_base_bdevs_discovered": 3, 00:12:21.806 "num_base_bdevs_operational": 3, 00:12:21.806 "base_bdevs_list": [ 00:12:21.806 { 00:12:21.806 "name": null, 00:12:21.806 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:21.806 "is_configured": false, 00:12:21.806 "data_offset": 0, 00:12:21.806 "data_size": 63488 00:12:21.806 }, 00:12:21.806 { 00:12:21.806 "name": "pt2", 00:12:21.806 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:21.806 "is_configured": true, 00:12:21.806 "data_offset": 2048, 00:12:21.806 "data_size": 63488 00:12:21.806 }, 00:12:21.806 { 00:12:21.806 "name": "pt3", 00:12:21.806 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:21.806 "is_configured": true, 00:12:21.806 "data_offset": 2048, 00:12:21.806 "data_size": 63488 00:12:21.806 }, 00:12:21.806 { 00:12:21.806 "name": "pt4", 00:12:21.806 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:21.806 "is_configured": true, 00:12:21.806 "data_offset": 2048, 00:12:21.806 "data_size": 63488 00:12:21.806 } 00:12:21.806 ] 00:12:21.806 }' 00:12:21.806 16:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:21.806 16:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.066 16:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:22.066 16:08:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.066 16:08:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.066 [2024-12-12 16:08:48.362083] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:22.066 [2024-12-12 16:08:48.362142] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:22.066 [2024-12-12 16:08:48.362258] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:12:22.066 [2024-12-12 16:08:48.362357] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:22.066 [2024-12-12 16:08:48.362369] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:12:22.066 16:08:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.066 16:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.066 16:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:12:22.066 16:08:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.066 16:08:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.066 16:08:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.066 16:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:12:22.066 16:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:12:22.066 16:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:12:22.326 16:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:22.326 16:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:12:22.326 16:08:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.326 16:08:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.326 16:08:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.326 16:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:22.326 16:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:22.326 
16:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:12:22.326 16:08:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.326 16:08:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.326 16:08:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.326 16:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:22.326 16:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:22.326 16:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:12:22.326 16:08:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.326 16:08:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.326 16:08:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.326 16:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:22.326 16:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:22.326 16:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:12:22.326 16:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:22.326 16:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:22.326 16:08:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.326 16:08:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.326 [2024-12-12 16:08:48.457828] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:22.326 [2024-12-12 16:08:48.457931] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:22.326 [2024-12-12 16:08:48.457957] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:12:22.326 [2024-12-12 16:08:48.457969] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:22.326 [2024-12-12 16:08:48.460659] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:22.326 [2024-12-12 16:08:48.460791] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:22.326 [2024-12-12 16:08:48.460926] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:22.326 [2024-12-12 16:08:48.460987] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:22.326 pt2 00:12:22.326 16:08:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.326 16:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:12:22.326 16:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:22.326 16:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:22.326 16:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:22.326 16:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:22.326 16:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:22.326 16:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:22.326 16:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:22.326 16:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:22.326 16:08:48 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:12:22.326 16:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:22.326 16:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.326 16:08:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.326 16:08:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.326 16:08:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.326 16:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:22.326 "name": "raid_bdev1", 00:12:22.326 "uuid": "5eff3211-bca3-41fa-a145-d2955d64950a", 00:12:22.326 "strip_size_kb": 0, 00:12:22.326 "state": "configuring", 00:12:22.326 "raid_level": "raid1", 00:12:22.326 "superblock": true, 00:12:22.326 "num_base_bdevs": 4, 00:12:22.326 "num_base_bdevs_discovered": 1, 00:12:22.326 "num_base_bdevs_operational": 3, 00:12:22.326 "base_bdevs_list": [ 00:12:22.326 { 00:12:22.326 "name": null, 00:12:22.326 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:22.326 "is_configured": false, 00:12:22.326 "data_offset": 2048, 00:12:22.326 "data_size": 63488 00:12:22.326 }, 00:12:22.326 { 00:12:22.326 "name": "pt2", 00:12:22.326 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:22.326 "is_configured": true, 00:12:22.326 "data_offset": 2048, 00:12:22.326 "data_size": 63488 00:12:22.326 }, 00:12:22.326 { 00:12:22.326 "name": null, 00:12:22.326 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:22.326 "is_configured": false, 00:12:22.326 "data_offset": 2048, 00:12:22.326 "data_size": 63488 00:12:22.326 }, 00:12:22.326 { 00:12:22.326 "name": null, 00:12:22.326 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:22.326 "is_configured": false, 00:12:22.327 "data_offset": 2048, 00:12:22.327 "data_size": 63488 00:12:22.327 } 00:12:22.327 ] 00:12:22.327 }' 
00:12:22.327 16:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:22.327 16:08:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.586 16:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:12:22.586 16:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:22.586 16:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:22.586 16:08:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.586 16:08:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.844 [2024-12-12 16:08:48.941136] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:22.844 [2024-12-12 16:08:48.941345] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:22.844 [2024-12-12 16:08:48.941399] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:12:22.844 [2024-12-12 16:08:48.941467] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:22.844 [2024-12-12 16:08:48.942098] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:22.844 [2024-12-12 16:08:48.942179] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:22.844 [2024-12-12 16:08:48.942342] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:22.844 [2024-12-12 16:08:48.942406] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:22.845 pt3 00:12:22.845 16:08:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.845 16:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 
3 00:12:22.845 16:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:22.845 16:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:22.845 16:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:22.845 16:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:22.845 16:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:22.845 16:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:22.845 16:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:22.845 16:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:22.845 16:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:22.845 16:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.845 16:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:22.845 16:08:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.845 16:08:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.845 16:08:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.845 16:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:22.845 "name": "raid_bdev1", 00:12:22.845 "uuid": "5eff3211-bca3-41fa-a145-d2955d64950a", 00:12:22.845 "strip_size_kb": 0, 00:12:22.845 "state": "configuring", 00:12:22.845 "raid_level": "raid1", 00:12:22.845 "superblock": true, 00:12:22.845 "num_base_bdevs": 4, 00:12:22.845 "num_base_bdevs_discovered": 2, 00:12:22.845 "num_base_bdevs_operational": 3, 00:12:22.845 
"base_bdevs_list": [ 00:12:22.845 { 00:12:22.845 "name": null, 00:12:22.845 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:22.845 "is_configured": false, 00:12:22.845 "data_offset": 2048, 00:12:22.845 "data_size": 63488 00:12:22.845 }, 00:12:22.845 { 00:12:22.845 "name": "pt2", 00:12:22.845 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:22.845 "is_configured": true, 00:12:22.845 "data_offset": 2048, 00:12:22.845 "data_size": 63488 00:12:22.845 }, 00:12:22.845 { 00:12:22.845 "name": "pt3", 00:12:22.845 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:22.845 "is_configured": true, 00:12:22.845 "data_offset": 2048, 00:12:22.845 "data_size": 63488 00:12:22.845 }, 00:12:22.845 { 00:12:22.845 "name": null, 00:12:22.845 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:22.845 "is_configured": false, 00:12:22.845 "data_offset": 2048, 00:12:22.845 "data_size": 63488 00:12:22.845 } 00:12:22.845 ] 00:12:22.845 }' 00:12:22.845 16:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:22.845 16:08:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.103 16:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:12:23.103 16:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:23.103 16:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:12:23.103 16:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:23.103 16:08:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.103 16:08:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.103 [2024-12-12 16:08:49.384466] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:23.103 [2024-12-12 16:08:49.384585] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:23.103 [2024-12-12 16:08:49.384620] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:12:23.103 [2024-12-12 16:08:49.384634] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:23.103 [2024-12-12 16:08:49.385247] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:23.103 [2024-12-12 16:08:49.385279] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:23.103 [2024-12-12 16:08:49.385408] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:23.103 [2024-12-12 16:08:49.385441] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:23.103 [2024-12-12 16:08:49.385606] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:23.103 [2024-12-12 16:08:49.385617] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:23.103 [2024-12-12 16:08:49.385937] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:23.103 [2024-12-12 16:08:49.386120] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:23.103 [2024-12-12 16:08:49.386144] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:12:23.103 [2024-12-12 16:08:49.386313] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:23.103 pt4 00:12:23.103 16:08:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.103 16:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:23.103 16:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:23.103 16:08:49 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:23.103 16:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:23.103 16:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:23.103 16:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:23.103 16:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:23.103 16:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:23.103 16:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:23.103 16:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:23.103 16:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.103 16:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:23.103 16:08:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.103 16:08:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.103 16:08:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.103 16:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:23.103 "name": "raid_bdev1", 00:12:23.103 "uuid": "5eff3211-bca3-41fa-a145-d2955d64950a", 00:12:23.103 "strip_size_kb": 0, 00:12:23.103 "state": "online", 00:12:23.103 "raid_level": "raid1", 00:12:23.103 "superblock": true, 00:12:23.103 "num_base_bdevs": 4, 00:12:23.103 "num_base_bdevs_discovered": 3, 00:12:23.103 "num_base_bdevs_operational": 3, 00:12:23.103 "base_bdevs_list": [ 00:12:23.103 { 00:12:23.103 "name": null, 00:12:23.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:23.103 "is_configured": false, 00:12:23.103 
"data_offset": 2048, 00:12:23.103 "data_size": 63488 00:12:23.103 }, 00:12:23.103 { 00:12:23.103 "name": "pt2", 00:12:23.103 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:23.103 "is_configured": true, 00:12:23.103 "data_offset": 2048, 00:12:23.103 "data_size": 63488 00:12:23.103 }, 00:12:23.103 { 00:12:23.103 "name": "pt3", 00:12:23.103 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:23.103 "is_configured": true, 00:12:23.103 "data_offset": 2048, 00:12:23.103 "data_size": 63488 00:12:23.103 }, 00:12:23.103 { 00:12:23.103 "name": "pt4", 00:12:23.103 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:23.103 "is_configured": true, 00:12:23.104 "data_offset": 2048, 00:12:23.104 "data_size": 63488 00:12:23.104 } 00:12:23.104 ] 00:12:23.104 }' 00:12:23.104 16:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:23.104 16:08:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.671 16:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:23.671 16:08:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.671 16:08:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.671 [2024-12-12 16:08:49.907558] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:23.671 [2024-12-12 16:08:49.907730] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:23.671 [2024-12-12 16:08:49.907881] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:23.672 [2024-12-12 16:08:49.908016] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:23.672 [2024-12-12 16:08:49.908082] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:12:23.672 16:08:49 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.672 16:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.672 16:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:12:23.672 16:08:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.672 16:08:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.672 16:08:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.672 16:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:12:23.672 16:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:12:23.672 16:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:12:23.672 16:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:12:23.672 16:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:12:23.672 16:08:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.672 16:08:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.672 16:08:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.672 16:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:23.672 16:08:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.672 16:08:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.672 [2024-12-12 16:08:49.979393] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:23.672 [2024-12-12 16:08:49.979546] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:12:23.672 [2024-12-12 16:08:49.979589] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:12:23.672 [2024-12-12 16:08:49.979608] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:23.672 [2024-12-12 16:08:49.982444] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:23.672 [2024-12-12 16:08:49.982500] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:23.672 [2024-12-12 16:08:49.982613] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:23.672 [2024-12-12 16:08:49.982686] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:23.672 [2024-12-12 16:08:49.982882] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:12:23.672 [2024-12-12 16:08:49.982928] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:23.672 [2024-12-12 16:08:49.982949] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:12:23.672 [2024-12-12 16:08:49.983031] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:23.672 [2024-12-12 16:08:49.983156] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:23.672 pt1 00:12:23.672 16:08:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.672 16:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:12:23.672 16:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:12:23.672 16:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:23.672 16:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:12:23.672 16:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:23.672 16:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:23.672 16:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:23.672 16:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:23.672 16:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:23.672 16:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:23.672 16:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:23.672 16:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.672 16:08:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.672 16:08:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.672 16:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:23.672 16:08:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.931 16:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:23.931 "name": "raid_bdev1", 00:12:23.931 "uuid": "5eff3211-bca3-41fa-a145-d2955d64950a", 00:12:23.931 "strip_size_kb": 0, 00:12:23.931 "state": "configuring", 00:12:23.931 "raid_level": "raid1", 00:12:23.931 "superblock": true, 00:12:23.931 "num_base_bdevs": 4, 00:12:23.931 "num_base_bdevs_discovered": 2, 00:12:23.931 "num_base_bdevs_operational": 3, 00:12:23.931 "base_bdevs_list": [ 00:12:23.931 { 00:12:23.931 "name": null, 00:12:23.931 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:23.931 "is_configured": false, 00:12:23.931 "data_offset": 2048, 00:12:23.931 
"data_size": 63488 00:12:23.931 }, 00:12:23.931 { 00:12:23.931 "name": "pt2", 00:12:23.931 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:23.931 "is_configured": true, 00:12:23.931 "data_offset": 2048, 00:12:23.931 "data_size": 63488 00:12:23.931 }, 00:12:23.931 { 00:12:23.931 "name": "pt3", 00:12:23.931 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:23.931 "is_configured": true, 00:12:23.931 "data_offset": 2048, 00:12:23.931 "data_size": 63488 00:12:23.931 }, 00:12:23.931 { 00:12:23.931 "name": null, 00:12:23.931 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:23.931 "is_configured": false, 00:12:23.931 "data_offset": 2048, 00:12:23.931 "data_size": 63488 00:12:23.931 } 00:12:23.931 ] 00:12:23.931 }' 00:12:23.931 16:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:23.931 16:08:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.190 16:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:12:24.190 16:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:12:24.190 16:08:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.190 16:08:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.190 16:08:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.190 16:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:12:24.190 16:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:24.190 16:08:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.190 16:08:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.190 [2024-12-12 
16:08:50.475071] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:24.190 [2024-12-12 16:08:50.475257] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:24.190 [2024-12-12 16:08:50.475310] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:12:24.190 [2024-12-12 16:08:50.475350] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:24.190 [2024-12-12 16:08:50.475985] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:24.190 [2024-12-12 16:08:50.476063] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:24.190 [2024-12-12 16:08:50.476223] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:24.190 [2024-12-12 16:08:50.476291] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:24.190 [2024-12-12 16:08:50.476504] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:12:24.190 [2024-12-12 16:08:50.476550] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:24.190 [2024-12-12 16:08:50.476913] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:12:24.190 [2024-12-12 16:08:50.477137] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:12:24.190 [2024-12-12 16:08:50.477189] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:12:24.190 [2024-12-12 16:08:50.477410] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:24.190 pt4 00:12:24.190 16:08:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.190 16:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:24.190 16:08:50 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:24.190 16:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:24.190 16:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:24.190 16:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:24.190 16:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:24.190 16:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:24.190 16:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:24.190 16:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:24.190 16:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:24.190 16:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.190 16:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:24.190 16:08:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.190 16:08:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.190 16:08:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.190 16:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:24.190 "name": "raid_bdev1", 00:12:24.190 "uuid": "5eff3211-bca3-41fa-a145-d2955d64950a", 00:12:24.190 "strip_size_kb": 0, 00:12:24.190 "state": "online", 00:12:24.190 "raid_level": "raid1", 00:12:24.190 "superblock": true, 00:12:24.190 "num_base_bdevs": 4, 00:12:24.190 "num_base_bdevs_discovered": 3, 00:12:24.190 "num_base_bdevs_operational": 3, 00:12:24.190 "base_bdevs_list": [ 00:12:24.190 { 
00:12:24.190 "name": null, 00:12:24.190 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:24.190 "is_configured": false, 00:12:24.190 "data_offset": 2048, 00:12:24.190 "data_size": 63488 00:12:24.190 }, 00:12:24.190 { 00:12:24.190 "name": "pt2", 00:12:24.190 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:24.190 "is_configured": true, 00:12:24.190 "data_offset": 2048, 00:12:24.190 "data_size": 63488 00:12:24.190 }, 00:12:24.190 { 00:12:24.190 "name": "pt3", 00:12:24.190 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:24.190 "is_configured": true, 00:12:24.190 "data_offset": 2048, 00:12:24.190 "data_size": 63488 00:12:24.190 }, 00:12:24.190 { 00:12:24.190 "name": "pt4", 00:12:24.190 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:24.190 "is_configured": true, 00:12:24.190 "data_offset": 2048, 00:12:24.190 "data_size": 63488 00:12:24.190 } 00:12:24.190 ] 00:12:24.190 }' 00:12:24.190 16:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:24.190 16:08:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.756 16:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:12:24.756 16:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:12:24.756 16:08:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.756 16:08:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.756 16:08:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.756 16:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:12:24.756 16:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:24.756 16:08:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.756 
16:08:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.756 16:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:12:24.756 [2024-12-12 16:08:50.959384] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:24.756 16:08:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.756 16:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 5eff3211-bca3-41fa-a145-d2955d64950a '!=' 5eff3211-bca3-41fa-a145-d2955d64950a ']' 00:12:24.756 16:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 76581 00:12:24.756 16:08:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 76581 ']' 00:12:24.756 16:08:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 76581 00:12:24.756 16:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:12:24.756 16:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:24.756 16:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76581 00:12:24.756 killing process with pid 76581 00:12:24.756 16:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:24.756 16:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:24.756 16:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76581' 00:12:24.756 16:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 76581 00:12:24.757 [2024-12-12 16:08:51.040017] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:24.757 16:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 76581 00:12:24.757 [2024-12-12 16:08:51.040165] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:24.757 [2024-12-12 16:08:51.040269] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:24.757 [2024-12-12 16:08:51.040286] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:12:25.323 [2024-12-12 16:08:51.522312] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:26.699 ************************************ 00:12:26.699 END TEST raid_superblock_test 00:12:26.699 ************************************ 00:12:26.699 16:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:12:26.699 00:12:26.699 real 0m8.961s 00:12:26.699 user 0m13.882s 00:12:26.699 sys 0m1.737s 00:12:26.699 16:08:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:26.699 16:08:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.699 16:08:52 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:12:26.699 16:08:52 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:26.699 16:08:52 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:26.699 16:08:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:26.699 ************************************ 00:12:26.699 START TEST raid_read_error_test 00:12:26.699 ************************************ 00:12:26.699 16:08:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 read 00:12:26.699 16:08:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:12:26.699 16:08:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:26.699 16:08:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:12:26.699 16:08:52 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:26.699 16:08:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:26.699 16:08:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:26.699 16:08:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:26.700 16:08:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:26.700 16:08:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:26.700 16:08:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:26.700 16:08:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:26.700 16:08:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:26.700 16:08:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:26.700 16:08:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:26.700 16:08:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:26.700 16:08:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:26.700 16:08:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:26.700 16:08:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:26.700 16:08:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:26.700 16:08:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:26.700 16:08:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:26.700 16:08:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:26.700 16:08:52 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:26.700 16:08:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:26.700 16:08:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:12:26.700 16:08:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:12:26.700 16:08:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:26.700 16:08:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.DhXvwusjYX 00:12:26.700 16:08:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=77074 00:12:26.700 16:08:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 77074 00:12:26.700 16:08:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:26.700 16:08:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 77074 ']' 00:12:26.700 16:08:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:26.700 16:08:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:26.700 16:08:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:26.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:26.700 16:08:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:26.700 16:08:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.700 [2024-12-12 16:08:52.919468] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:12:26.700 [2024-12-12 16:08:52.919645] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77074 ] 00:12:26.958 [2024-12-12 16:08:53.099928] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:26.958 [2024-12-12 16:08:53.239788] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:27.217 [2024-12-12 16:08:53.478363] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:27.217 [2024-12-12 16:08:53.478460] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:27.476 16:08:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:27.476 16:08:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:27.476 16:08:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:27.476 16:08:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:27.476 16:08:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.476 16:08:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.476 BaseBdev1_malloc 00:12:27.476 16:08:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.476 16:08:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:27.476 16:08:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.476 16:08:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.476 true 00:12:27.476 16:08:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:27.476 16:08:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:27.476 16:08:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.476 16:08:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.476 [2024-12-12 16:08:53.819832] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:27.476 [2024-12-12 16:08:53.819919] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:27.476 [2024-12-12 16:08:53.819944] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:27.476 [2024-12-12 16:08:53.819958] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:27.476 [2024-12-12 16:08:53.822270] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:27.476 [2024-12-12 16:08:53.822400] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:27.476 BaseBdev1 00:12:27.476 16:08:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.476 16:08:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:27.476 16:08:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:27.476 16:08:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.476 16:08:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.736 BaseBdev2_malloc 00:12:27.736 16:08:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.736 16:08:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:27.736 16:08:53 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.736 16:08:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.736 true 00:12:27.736 16:08:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.736 16:08:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:27.736 16:08:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.736 16:08:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.736 [2024-12-12 16:08:53.891535] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:27.736 [2024-12-12 16:08:53.891603] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:27.736 [2024-12-12 16:08:53.891639] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:27.736 [2024-12-12 16:08:53.891652] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:27.736 [2024-12-12 16:08:53.894006] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:27.736 [2024-12-12 16:08:53.894047] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:27.736 BaseBdev2 00:12:27.736 16:08:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.736 16:08:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:27.736 16:08:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:27.736 16:08:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.736 16:08:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.736 BaseBdev3_malloc 00:12:27.736 16:08:53 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.736 16:08:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:27.736 16:08:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.736 16:08:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.736 true 00:12:27.736 16:08:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.736 16:08:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:27.736 16:08:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.736 16:08:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.736 [2024-12-12 16:08:53.976086] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:27.736 [2024-12-12 16:08:53.976145] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:27.736 [2024-12-12 16:08:53.976165] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:27.736 [2024-12-12 16:08:53.976178] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:27.736 [2024-12-12 16:08:53.978490] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:27.736 [2024-12-12 16:08:53.978536] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:27.736 BaseBdev3 00:12:27.736 16:08:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.736 16:08:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:27.736 16:08:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:12:27.736 16:08:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.736 16:08:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.736 BaseBdev4_malloc 00:12:27.736 16:08:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.736 16:08:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:27.736 16:08:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.736 16:08:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.736 true 00:12:27.736 16:08:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.736 16:08:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:27.736 16:08:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.736 16:08:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.736 [2024-12-12 16:08:54.051501] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:27.736 [2024-12-12 16:08:54.051652] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:27.736 [2024-12-12 16:08:54.051676] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:27.736 [2024-12-12 16:08:54.051689] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:27.736 [2024-12-12 16:08:54.054013] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:27.736 [2024-12-12 16:08:54.054056] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:27.736 BaseBdev4 00:12:27.736 16:08:54 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.736 16:08:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:27.736 16:08:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.736 16:08:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.736 [2024-12-12 16:08:54.063537] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:27.736 [2024-12-12 16:08:54.065582] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:27.736 [2024-12-12 16:08:54.065661] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:27.736 [2024-12-12 16:08:54.065720] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:27.736 [2024-12-12 16:08:54.065970] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:12:27.736 [2024-12-12 16:08:54.065990] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:27.736 [2024-12-12 16:08:54.066232] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:12:27.736 [2024-12-12 16:08:54.066475] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:12:27.736 [2024-12-12 16:08:54.066488] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:12:27.736 [2024-12-12 16:08:54.066647] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:27.736 16:08:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.736 16:08:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:27.737 16:08:54 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:27.737 16:08:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:27.737 16:08:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:27.737 16:08:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:27.737 16:08:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:27.737 16:08:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:27.737 16:08:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:27.737 16:08:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:27.737 16:08:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:27.737 16:08:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.737 16:08:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:27.737 16:08:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.737 16:08:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.996 16:08:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.996 16:08:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:27.996 "name": "raid_bdev1", 00:12:27.996 "uuid": "4d17dcb3-156e-48e3-958a-8088f8996ebc", 00:12:27.996 "strip_size_kb": 0, 00:12:27.996 "state": "online", 00:12:27.996 "raid_level": "raid1", 00:12:27.996 "superblock": true, 00:12:27.996 "num_base_bdevs": 4, 00:12:27.996 "num_base_bdevs_discovered": 4, 00:12:27.996 "num_base_bdevs_operational": 4, 00:12:27.996 "base_bdevs_list": [ 00:12:27.996 { 
00:12:27.996 "name": "BaseBdev1", 00:12:27.996 "uuid": "89406b63-4688-527f-99d8-7664263206a5", 00:12:27.996 "is_configured": true, 00:12:27.996 "data_offset": 2048, 00:12:27.996 "data_size": 63488 00:12:27.996 }, 00:12:27.996 { 00:12:27.996 "name": "BaseBdev2", 00:12:27.996 "uuid": "cecef824-2796-585f-ac76-a688bb47e7d0", 00:12:27.996 "is_configured": true, 00:12:27.996 "data_offset": 2048, 00:12:27.996 "data_size": 63488 00:12:27.996 }, 00:12:27.996 { 00:12:27.996 "name": "BaseBdev3", 00:12:27.996 "uuid": "6e75f195-22a9-54c1-834f-cf85be163d73", 00:12:27.996 "is_configured": true, 00:12:27.996 "data_offset": 2048, 00:12:27.996 "data_size": 63488 00:12:27.996 }, 00:12:27.996 { 00:12:27.996 "name": "BaseBdev4", 00:12:27.996 "uuid": "45f5086d-ce3c-5df0-b91d-0c6ba2a53701", 00:12:27.996 "is_configured": true, 00:12:27.996 "data_offset": 2048, 00:12:27.996 "data_size": 63488 00:12:27.996 } 00:12:27.996 ] 00:12:27.996 }' 00:12:27.996 16:08:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:27.996 16:08:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.254 16:08:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:28.254 16:08:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:28.513 [2024-12-12 16:08:54.632388] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:12:29.449 16:08:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:12:29.449 16:08:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.449 16:08:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.449 16:08:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.449 16:08:55 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:29.449 16:08:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:12:29.449 16:08:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:12:29.449 16:08:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:12:29.449 16:08:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:29.449 16:08:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:29.449 16:08:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:29.449 16:08:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:29.449 16:08:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:29.449 16:08:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:29.449 16:08:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:29.449 16:08:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:29.449 16:08:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:29.449 16:08:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:29.449 16:08:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.449 16:08:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:29.449 16:08:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.449 16:08:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.449 16:08:55 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.449 16:08:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:29.449 "name": "raid_bdev1", 00:12:29.449 "uuid": "4d17dcb3-156e-48e3-958a-8088f8996ebc", 00:12:29.449 "strip_size_kb": 0, 00:12:29.449 "state": "online", 00:12:29.449 "raid_level": "raid1", 00:12:29.449 "superblock": true, 00:12:29.449 "num_base_bdevs": 4, 00:12:29.449 "num_base_bdevs_discovered": 4, 00:12:29.449 "num_base_bdevs_operational": 4, 00:12:29.449 "base_bdevs_list": [ 00:12:29.449 { 00:12:29.449 "name": "BaseBdev1", 00:12:29.449 "uuid": "89406b63-4688-527f-99d8-7664263206a5", 00:12:29.449 "is_configured": true, 00:12:29.449 "data_offset": 2048, 00:12:29.449 "data_size": 63488 00:12:29.449 }, 00:12:29.449 { 00:12:29.449 "name": "BaseBdev2", 00:12:29.449 "uuid": "cecef824-2796-585f-ac76-a688bb47e7d0", 00:12:29.449 "is_configured": true, 00:12:29.449 "data_offset": 2048, 00:12:29.449 "data_size": 63488 00:12:29.449 }, 00:12:29.449 { 00:12:29.449 "name": "BaseBdev3", 00:12:29.449 "uuid": "6e75f195-22a9-54c1-834f-cf85be163d73", 00:12:29.449 "is_configured": true, 00:12:29.449 "data_offset": 2048, 00:12:29.449 "data_size": 63488 00:12:29.449 }, 00:12:29.449 { 00:12:29.449 "name": "BaseBdev4", 00:12:29.449 "uuid": "45f5086d-ce3c-5df0-b91d-0c6ba2a53701", 00:12:29.449 "is_configured": true, 00:12:29.449 "data_offset": 2048, 00:12:29.449 "data_size": 63488 00:12:29.449 } 00:12:29.449 ] 00:12:29.449 }' 00:12:29.449 16:08:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:29.449 16:08:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.708 16:08:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:29.708 16:08:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.708 16:08:56 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:29.965 [2024-12-12 16:08:56.059721] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:29.965 [2024-12-12 16:08:56.059779] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:29.965 [2024-12-12 16:08:56.062753] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:29.965 [2024-12-12 16:08:56.062870] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:29.965 [2024-12-12 16:08:56.063047] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:29.965 [2024-12-12 16:08:56.063107] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:12:29.965 { 00:12:29.965 "results": [ 00:12:29.965 { 00:12:29.965 "job": "raid_bdev1", 00:12:29.965 "core_mask": "0x1", 00:12:29.965 "workload": "randrw", 00:12:29.965 "percentage": 50, 00:12:29.965 "status": "finished", 00:12:29.965 "queue_depth": 1, 00:12:29.965 "io_size": 131072, 00:12:29.965 "runtime": 1.428262, 00:12:29.965 "iops": 7849.400180079005, 00:12:29.965 "mibps": 981.1750225098756, 00:12:29.965 "io_failed": 0, 00:12:29.965 "io_timeout": 0, 00:12:29.965 "avg_latency_us": 124.53143656865392, 00:12:29.965 "min_latency_us": 24.593886462882097, 00:12:29.965 "max_latency_us": 1316.4436681222708 00:12:29.965 } 00:12:29.965 ], 00:12:29.965 "core_count": 1 00:12:29.965 } 00:12:29.965 16:08:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.965 16:08:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 77074 00:12:29.965 16:08:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 77074 ']' 00:12:29.965 16:08:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 77074 00:12:29.965 16:08:56 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@959 -- # uname 00:12:29.965 16:08:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:29.965 16:08:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77074 00:12:29.965 16:08:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:29.965 16:08:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:29.965 killing process with pid 77074 00:12:29.965 16:08:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77074' 00:12:29.965 16:08:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 77074 00:12:29.965 [2024-12-12 16:08:56.105604] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:29.965 16:08:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 77074 00:12:30.224 [2024-12-12 16:08:56.456161] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:31.601 16:08:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.DhXvwusjYX 00:12:31.601 16:08:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:31.601 16:08:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:31.601 16:08:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:12:31.601 16:08:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:12:31.601 16:08:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:31.601 16:08:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:31.601 16:08:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:12:31.601 00:12:31.601 real 0m4.940s 00:12:31.601 user 0m5.715s 00:12:31.601 sys 0m0.732s 
00:12:31.601 16:08:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:31.601 16:08:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.601 ************************************ 00:12:31.601 END TEST raid_read_error_test 00:12:31.601 ************************************ 00:12:31.601 16:08:57 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:12:31.601 16:08:57 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:31.601 16:08:57 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:31.601 16:08:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:31.601 ************************************ 00:12:31.601 START TEST raid_write_error_test 00:12:31.601 ************************************ 00:12:31.601 16:08:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 write 00:12:31.601 16:08:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:12:31.601 16:08:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:31.601 16:08:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:12:31.601 16:08:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:31.601 16:08:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:31.601 16:08:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:31.601 16:08:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:31.601 16:08:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:31.601 16:08:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:31.601 16:08:57 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:31.601 16:08:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:31.601 16:08:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:31.601 16:08:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:31.601 16:08:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:31.601 16:08:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:31.601 16:08:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:31.601 16:08:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:31.601 16:08:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:31.601 16:08:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:31.601 16:08:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:31.601 16:08:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:31.601 16:08:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:31.601 16:08:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:31.601 16:08:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:31.601 16:08:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:12:31.601 16:08:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:12:31.601 16:08:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:31.601 16:08:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.PAFxsX7YtM 00:12:31.601 16:08:57 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=77225 00:12:31.601 16:08:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 77225 00:12:31.601 16:08:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:31.601 16:08:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 77225 ']' 00:12:31.601 16:08:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:31.601 16:08:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:31.601 16:08:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:31.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:31.601 16:08:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:31.601 16:08:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.601 [2024-12-12 16:08:57.925881] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:12:31.601 [2024-12-12 16:08:57.926130] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77225 ] 00:12:31.861 [2024-12-12 16:08:58.105789] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:32.120 [2024-12-12 16:08:58.238345] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:32.120 [2024-12-12 16:08:58.461398] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:32.120 [2024-12-12 16:08:58.461552] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:32.689 16:08:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:32.689 16:08:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:32.689 16:08:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:32.689 16:08:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:32.689 16:08:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.689 16:08:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.689 BaseBdev1_malloc 00:12:32.689 16:08:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.689 16:08:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:32.689 16:08:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.689 16:08:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.689 true 00:12:32.689 16:08:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:12:32.689 16:08:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:32.689 16:08:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.689 16:08:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.689 [2024-12-12 16:08:58.816350] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:32.689 [2024-12-12 16:08:58.816514] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:32.689 [2024-12-12 16:08:58.816542] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:32.689 [2024-12-12 16:08:58.816557] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:32.689 [2024-12-12 16:08:58.818956] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:32.689 [2024-12-12 16:08:58.818999] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:32.689 BaseBdev1 00:12:32.689 16:08:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.689 16:08:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:32.689 16:08:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:32.689 16:08:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.689 16:08:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.689 BaseBdev2_malloc 00:12:32.689 16:08:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.689 16:08:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:32.689 16:08:58 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.689 16:08:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.689 true 00:12:32.689 16:08:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.689 16:08:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:32.689 16:08:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.689 16:08:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.689 [2024-12-12 16:08:58.888739] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:32.689 [2024-12-12 16:08:58.888802] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:32.689 [2024-12-12 16:08:58.888821] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:32.689 [2024-12-12 16:08:58.888834] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:32.689 [2024-12-12 16:08:58.891162] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:32.689 [2024-12-12 16:08:58.891207] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:32.689 BaseBdev2 00:12:32.689 16:08:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.689 16:08:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:32.689 16:08:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:32.689 16:08:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.689 16:08:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:12:32.689 BaseBdev3_malloc 00:12:32.689 16:08:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.689 16:08:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:32.689 16:08:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.689 16:08:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.689 true 00:12:32.689 16:08:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.689 16:08:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:32.689 16:08:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.689 16:08:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.690 [2024-12-12 16:08:58.978589] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:32.690 [2024-12-12 16:08:58.978648] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:32.690 [2024-12-12 16:08:58.978669] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:32.690 [2024-12-12 16:08:58.978682] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:32.690 [2024-12-12 16:08:58.981086] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:32.690 [2024-12-12 16:08:58.981132] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:32.690 BaseBdev3 00:12:32.690 16:08:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.690 16:08:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:32.690 16:08:58 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:32.690 16:08:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.690 16:08:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.690 BaseBdev4_malloc 00:12:32.690 16:08:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.690 16:08:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:32.690 16:08:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.690 16:08:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.949 true 00:12:32.949 16:08:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.949 16:08:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:32.949 16:08:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.949 16:08:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.949 [2024-12-12 16:08:59.053099] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:32.949 [2024-12-12 16:08:59.053242] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:32.949 [2024-12-12 16:08:59.053267] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:32.949 [2024-12-12 16:08:59.053282] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:32.949 [2024-12-12 16:08:59.055678] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:32.949 [2024-12-12 16:08:59.055725] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:32.949 BaseBdev4 
00:12:32.949 16:08:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.949 16:08:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:32.949 16:08:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.949 16:08:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.949 [2024-12-12 16:08:59.065134] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:32.949 [2024-12-12 16:08:59.067280] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:32.949 [2024-12-12 16:08:59.067369] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:32.949 [2024-12-12 16:08:59.067440] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:32.949 [2024-12-12 16:08:59.067704] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:12:32.949 [2024-12-12 16:08:59.067724] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:32.949 [2024-12-12 16:08:59.068002] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:12:32.949 [2024-12-12 16:08:59.068203] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:12:32.949 [2024-12-12 16:08:59.068215] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:12:32.949 [2024-12-12 16:08:59.068415] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:32.949 16:08:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.949 16:08:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:12:32.949 16:08:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:32.949 16:08:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:32.949 16:08:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:32.949 16:08:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:32.949 16:08:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:32.949 16:08:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:32.949 16:08:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:32.949 16:08:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:32.949 16:08:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:32.949 16:08:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.949 16:08:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:32.949 16:08:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.949 16:08:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.949 16:08:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.949 16:08:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:32.949 "name": "raid_bdev1", 00:12:32.949 "uuid": "95a7cab4-7470-430e-a8e6-9e6c0d9829a6", 00:12:32.949 "strip_size_kb": 0, 00:12:32.949 "state": "online", 00:12:32.949 "raid_level": "raid1", 00:12:32.949 "superblock": true, 00:12:32.949 "num_base_bdevs": 4, 00:12:32.949 "num_base_bdevs_discovered": 4, 00:12:32.949 
"num_base_bdevs_operational": 4, 00:12:32.949 "base_bdevs_list": [ 00:12:32.949 { 00:12:32.949 "name": "BaseBdev1", 00:12:32.949 "uuid": "edfa6910-c254-5ee3-8d4f-05b2738a41ad", 00:12:32.949 "is_configured": true, 00:12:32.949 "data_offset": 2048, 00:12:32.949 "data_size": 63488 00:12:32.949 }, 00:12:32.949 { 00:12:32.949 "name": "BaseBdev2", 00:12:32.949 "uuid": "fd7890ec-7f8e-5868-a5a0-0aa22acc06d9", 00:12:32.950 "is_configured": true, 00:12:32.950 "data_offset": 2048, 00:12:32.950 "data_size": 63488 00:12:32.950 }, 00:12:32.950 { 00:12:32.950 "name": "BaseBdev3", 00:12:32.950 "uuid": "f3e6dab8-bcb7-5296-bca7-a51b4c53dd4e", 00:12:32.950 "is_configured": true, 00:12:32.950 "data_offset": 2048, 00:12:32.950 "data_size": 63488 00:12:32.950 }, 00:12:32.950 { 00:12:32.950 "name": "BaseBdev4", 00:12:32.950 "uuid": "cd283b09-ea87-558f-9958-d0d1cb0d425b", 00:12:32.950 "is_configured": true, 00:12:32.950 "data_offset": 2048, 00:12:32.950 "data_size": 63488 00:12:32.950 } 00:12:32.950 ] 00:12:32.950 }' 00:12:32.950 16:08:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:32.950 16:08:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.209 16:08:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:33.209 16:08:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:33.468 [2024-12-12 16:08:59.617848] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:12:34.533 16:09:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:34.533 16:09:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.533 16:09:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.533 [2024-12-12 16:09:00.542206] 
bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:12:34.533 [2024-12-12 16:09:00.542408] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:34.533 [2024-12-12 16:09:00.542679] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006a40 00:12:34.533 16:09:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.533 16:09:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:34.533 16:09:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:12:34.533 16:09:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:12:34.533 16:09:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:12:34.533 16:09:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:34.533 16:09:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:34.533 16:09:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:34.533 16:09:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:34.533 16:09:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:34.533 16:09:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:34.533 16:09:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:34.533 16:09:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:34.533 16:09:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:34.533 16:09:00 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:12:34.533 16:09:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.533 16:09:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:34.533 16:09:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.533 16:09:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.533 16:09:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.533 16:09:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:34.533 "name": "raid_bdev1", 00:12:34.533 "uuid": "95a7cab4-7470-430e-a8e6-9e6c0d9829a6", 00:12:34.533 "strip_size_kb": 0, 00:12:34.533 "state": "online", 00:12:34.533 "raid_level": "raid1", 00:12:34.533 "superblock": true, 00:12:34.533 "num_base_bdevs": 4, 00:12:34.533 "num_base_bdevs_discovered": 3, 00:12:34.533 "num_base_bdevs_operational": 3, 00:12:34.533 "base_bdevs_list": [ 00:12:34.533 { 00:12:34.533 "name": null, 00:12:34.533 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:34.533 "is_configured": false, 00:12:34.533 "data_offset": 0, 00:12:34.533 "data_size": 63488 00:12:34.533 }, 00:12:34.533 { 00:12:34.533 "name": "BaseBdev2", 00:12:34.533 "uuid": "fd7890ec-7f8e-5868-a5a0-0aa22acc06d9", 00:12:34.533 "is_configured": true, 00:12:34.533 "data_offset": 2048, 00:12:34.533 "data_size": 63488 00:12:34.533 }, 00:12:34.533 { 00:12:34.533 "name": "BaseBdev3", 00:12:34.533 "uuid": "f3e6dab8-bcb7-5296-bca7-a51b4c53dd4e", 00:12:34.533 "is_configured": true, 00:12:34.533 "data_offset": 2048, 00:12:34.533 "data_size": 63488 00:12:34.533 }, 00:12:34.533 { 00:12:34.533 "name": "BaseBdev4", 00:12:34.533 "uuid": "cd283b09-ea87-558f-9958-d0d1cb0d425b", 00:12:34.533 "is_configured": true, 00:12:34.533 "data_offset": 2048, 00:12:34.533 "data_size": 63488 00:12:34.533 } 00:12:34.533 ] 
00:12:34.533 }' 00:12:34.533 16:09:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:34.533 16:09:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.808 16:09:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:34.808 16:09:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.808 16:09:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.808 [2024-12-12 16:09:00.972692] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:34.808 [2024-12-12 16:09:00.972750] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:34.808 [2024-12-12 16:09:00.975409] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:34.808 [2024-12-12 16:09:00.975506] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:34.808 [2024-12-12 16:09:00.975656] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:34.808 [2024-12-12 16:09:00.975736] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:12:34.808 { 00:12:34.808 "results": [ 00:12:34.808 { 00:12:34.808 "job": "raid_bdev1", 00:12:34.808 "core_mask": "0x1", 00:12:34.808 "workload": "randrw", 00:12:34.808 "percentage": 50, 00:12:34.808 "status": "finished", 00:12:34.808 "queue_depth": 1, 00:12:34.808 "io_size": 131072, 00:12:34.808 "runtime": 1.355342, 00:12:34.808 "iops": 8452.479152863263, 00:12:34.808 "mibps": 1056.559894107908, 00:12:34.808 "io_failed": 0, 00:12:34.808 "io_timeout": 0, 00:12:34.808 "avg_latency_us": 115.36496791978726, 00:12:34.808 "min_latency_us": 25.041048034934498, 00:12:34.808 "max_latency_us": 1466.6899563318777 00:12:34.808 } 00:12:34.808 ], 00:12:34.808 "core_count": 1 
00:12:34.808 } 00:12:34.808 16:09:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.808 16:09:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 77225 00:12:34.808 16:09:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 77225 ']' 00:12:34.808 16:09:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 77225 00:12:34.808 16:09:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:12:34.808 16:09:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:34.808 16:09:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77225 00:12:34.808 16:09:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:34.808 16:09:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:34.808 killing process with pid 77225 00:12:34.808 16:09:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77225' 00:12:34.808 16:09:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 77225 00:12:34.808 [2024-12-12 16:09:01.013341] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:34.808 16:09:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 77225 00:12:35.078 [2024-12-12 16:09:01.363544] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:36.457 16:09:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.PAFxsX7YtM 00:12:36.457 16:09:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:36.457 16:09:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:36.457 ************************************ 00:12:36.457 END TEST 
raid_write_error_test 00:12:36.457 ************************************ 00:12:36.457 16:09:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:12:36.457 16:09:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:12:36.457 16:09:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:36.458 16:09:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:36.458 16:09:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:12:36.458 00:12:36.458 real 0m4.826s 00:12:36.458 user 0m5.513s 00:12:36.458 sys 0m0.701s 00:12:36.458 16:09:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:36.458 16:09:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.458 16:09:02 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:12:36.458 16:09:02 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:12:36.458 16:09:02 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:12:36.458 16:09:02 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:36.458 16:09:02 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:36.458 16:09:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:36.458 ************************************ 00:12:36.458 START TEST raid_rebuild_test 00:12:36.458 ************************************ 00:12:36.458 16:09:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false false true 00:12:36.458 16:09:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:36.458 16:09:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:12:36.458 16:09:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 
00:12:36.458 16:09:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:12:36.458 16:09:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:36.458 16:09:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:36.458 16:09:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:36.458 16:09:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:36.458 16:09:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:36.458 16:09:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:36.458 16:09:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:36.458 16:09:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:36.458 16:09:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:36.458 16:09:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:36.458 16:09:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:36.458 16:09:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:36.458 16:09:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:36.458 16:09:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:36.458 16:09:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:36.458 16:09:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:36.458 16:09:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:36.458 16:09:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:36.458 16:09:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true 
']' 00:12:36.458 16:09:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=77363 00:12:36.458 16:09:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:36.458 16:09:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 77363 00:12:36.458 16:09:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 77363 ']' 00:12:36.458 16:09:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:36.458 16:09:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:36.458 16:09:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:36.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:36.458 16:09:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:36.458 16:09:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.458 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:36.458 Zero copy mechanism will not be used. 00:12:36.458 [2024-12-12 16:09:02.805132] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:12:36.458 [2024-12-12 16:09:02.805254] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77363 ] 00:12:36.717 [2024-12-12 16:09:02.979453] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:36.975 [2024-12-12 16:09:03.117270] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:37.234 [2024-12-12 16:09:03.360148] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:37.234 [2024-12-12 16:09:03.360240] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:37.493 16:09:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:37.493 16:09:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:12:37.493 16:09:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:37.493 16:09:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:37.493 16:09:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.493 16:09:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.493 BaseBdev1_malloc 00:12:37.493 16:09:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.493 16:09:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:37.493 16:09:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.493 16:09:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.493 [2024-12-12 16:09:03.693762] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:37.493 
[2024-12-12 16:09:03.693858] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:37.493 [2024-12-12 16:09:03.693885] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:37.493 [2024-12-12 16:09:03.693916] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:37.493 [2024-12-12 16:09:03.696304] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:37.493 [2024-12-12 16:09:03.696352] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:37.493 BaseBdev1 00:12:37.493 16:09:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.493 16:09:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:37.493 16:09:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:37.493 16:09:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.493 16:09:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.493 BaseBdev2_malloc 00:12:37.493 16:09:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.493 16:09:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:37.493 16:09:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.493 16:09:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.493 [2024-12-12 16:09:03.753432] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:37.493 [2024-12-12 16:09:03.753516] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:37.493 [2024-12-12 16:09:03.753540] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000007e80 00:12:37.493 [2024-12-12 16:09:03.753557] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:37.493 [2024-12-12 16:09:03.756030] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:37.493 [2024-12-12 16:09:03.756174] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:37.493 BaseBdev2 00:12:37.493 16:09:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.493 16:09:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:37.493 16:09:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.493 16:09:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.493 spare_malloc 00:12:37.493 16:09:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.493 16:09:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:37.493 16:09:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.493 16:09:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.493 spare_delay 00:12:37.493 16:09:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.493 16:09:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:37.493 16:09:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.493 16:09:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.493 [2024-12-12 16:09:03.838855] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:37.493 [2024-12-12 16:09:03.838952] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:12:37.493 [2024-12-12 16:09:03.838976] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:37.493 [2024-12-12 16:09:03.838990] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:37.493 [2024-12-12 16:09:03.841434] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:37.493 [2024-12-12 16:09:03.841483] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:37.752 spare 00:12:37.752 16:09:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.752 16:09:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:37.752 16:09:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.752 16:09:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.752 [2024-12-12 16:09:03.850909] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:37.752 [2024-12-12 16:09:03.853035] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:37.752 [2024-12-12 16:09:03.853138] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:37.752 [2024-12-12 16:09:03.853154] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:37.752 [2024-12-12 16:09:03.853408] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:37.752 [2024-12-12 16:09:03.853568] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:37.752 [2024-12-12 16:09:03.853580] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:37.752 [2024-12-12 16:09:03.853745] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:12:37.752 16:09:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.752 16:09:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:37.752 16:09:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:37.752 16:09:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:37.752 16:09:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:37.752 16:09:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:37.752 16:09:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:37.752 16:09:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:37.752 16:09:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:37.752 16:09:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:37.752 16:09:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:37.752 16:09:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.752 16:09:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:37.752 16:09:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.752 16:09:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.752 16:09:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.752 16:09:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:37.752 "name": "raid_bdev1", 00:12:37.752 "uuid": "2524565c-b542-4fc7-a578-5638656f75b9", 00:12:37.752 "strip_size_kb": 0, 00:12:37.752 "state": "online", 00:12:37.752 
"raid_level": "raid1", 00:12:37.752 "superblock": false, 00:12:37.752 "num_base_bdevs": 2, 00:12:37.752 "num_base_bdevs_discovered": 2, 00:12:37.752 "num_base_bdevs_operational": 2, 00:12:37.752 "base_bdevs_list": [ 00:12:37.752 { 00:12:37.752 "name": "BaseBdev1", 00:12:37.752 "uuid": "3c0fdb34-98aa-57cf-a180-6f1ec040fe2e", 00:12:37.752 "is_configured": true, 00:12:37.752 "data_offset": 0, 00:12:37.752 "data_size": 65536 00:12:37.752 }, 00:12:37.752 { 00:12:37.752 "name": "BaseBdev2", 00:12:37.752 "uuid": "ea280d0e-8a7e-575a-947d-42aca82ccab3", 00:12:37.752 "is_configured": true, 00:12:37.752 "data_offset": 0, 00:12:37.752 "data_size": 65536 00:12:37.752 } 00:12:37.752 ] 00:12:37.752 }' 00:12:37.752 16:09:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:37.752 16:09:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.012 16:09:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:38.012 16:09:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:38.012 16:09:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.012 16:09:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.012 [2024-12-12 16:09:04.310454] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:38.012 16:09:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.012 16:09:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:12:38.012 16:09:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.012 16:09:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:38.012 16:09:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.012 16:09:04 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.272 16:09:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.272 16:09:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:12:38.272 16:09:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:12:38.272 16:09:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:12:38.272 16:09:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:12:38.272 16:09:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:12:38.272 16:09:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:38.272 16:09:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:12:38.272 16:09:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:38.272 16:09:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:38.272 16:09:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:38.272 16:09:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:12:38.272 16:09:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:38.272 16:09:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:38.272 16:09:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:12:38.272 [2024-12-12 16:09:04.593755] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:38.272 /dev/nbd0 00:12:38.531 16:09:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:38.531 16:09:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- 
# waitfornbd nbd0 00:12:38.531 16:09:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:38.531 16:09:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:12:38.531 16:09:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:38.531 16:09:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:38.531 16:09:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:38.531 16:09:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:12:38.531 16:09:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:38.531 16:09:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:38.531 16:09:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:38.531 1+0 records in 00:12:38.531 1+0 records out 00:12:38.531 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000262991 s, 15.6 MB/s 00:12:38.531 16:09:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:38.531 16:09:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:12:38.531 16:09:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:38.531 16:09:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:38.531 16:09:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:12:38.531 16:09:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:38.531 16:09:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:38.531 16:09:04 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:12:38.531 16:09:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:12:38.531 16:09:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:12:43.809 65536+0 records in 00:12:43.809 65536+0 records out 00:12:43.809 33554432 bytes (34 MB, 32 MiB) copied, 4.4257 s, 7.6 MB/s 00:12:43.809 16:09:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:43.809 16:09:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:43.809 16:09:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:43.809 16:09:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:43.809 16:09:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:12:43.809 16:09:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:43.809 16:09:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:43.809 16:09:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:43.809 [2024-12-12 16:09:09.305456] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:43.809 16:09:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:43.809 16:09:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:43.809 16:09:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:43.809 16:09:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:43.809 16:09:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:43.809 16:09:09 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@41 -- # break 00:12:43.809 16:09:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:43.809 16:09:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:43.809 16:09:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.809 16:09:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.809 [2024-12-12 16:09:09.325583] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:43.809 16:09:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.809 16:09:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:43.809 16:09:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:43.809 16:09:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:43.809 16:09:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:43.809 16:09:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:43.809 16:09:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:43.809 16:09:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:43.809 16:09:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:43.809 16:09:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:43.809 16:09:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:43.809 16:09:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.809 16:09:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.809 16:09:09 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:43.809 16:09:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.809 16:09:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.809 16:09:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:43.809 "name": "raid_bdev1", 00:12:43.809 "uuid": "2524565c-b542-4fc7-a578-5638656f75b9", 00:12:43.809 "strip_size_kb": 0, 00:12:43.809 "state": "online", 00:12:43.809 "raid_level": "raid1", 00:12:43.809 "superblock": false, 00:12:43.809 "num_base_bdevs": 2, 00:12:43.809 "num_base_bdevs_discovered": 1, 00:12:43.809 "num_base_bdevs_operational": 1, 00:12:43.809 "base_bdevs_list": [ 00:12:43.809 { 00:12:43.809 "name": null, 00:12:43.809 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.809 "is_configured": false, 00:12:43.809 "data_offset": 0, 00:12:43.809 "data_size": 65536 00:12:43.809 }, 00:12:43.809 { 00:12:43.809 "name": "BaseBdev2", 00:12:43.809 "uuid": "ea280d0e-8a7e-575a-947d-42aca82ccab3", 00:12:43.809 "is_configured": true, 00:12:43.809 "data_offset": 0, 00:12:43.809 "data_size": 65536 00:12:43.809 } 00:12:43.809 ] 00:12:43.809 }' 00:12:43.809 16:09:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:43.809 16:09:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.809 16:09:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:43.809 16:09:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.809 16:09:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.809 [2024-12-12 16:09:09.828793] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:43.809 [2024-12-12 16:09:09.849161] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 
00:12:43.809 16:09:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.809 16:09:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:43.809 [2024-12-12 16:09:09.851480] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:44.749 16:09:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:44.749 16:09:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:44.749 16:09:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:44.749 16:09:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:44.749 16:09:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:44.749 16:09:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.749 16:09:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:44.749 16:09:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.749 16:09:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.749 16:09:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.749 16:09:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:44.749 "name": "raid_bdev1", 00:12:44.749 "uuid": "2524565c-b542-4fc7-a578-5638656f75b9", 00:12:44.749 "strip_size_kb": 0, 00:12:44.749 "state": "online", 00:12:44.749 "raid_level": "raid1", 00:12:44.749 "superblock": false, 00:12:44.749 "num_base_bdevs": 2, 00:12:44.749 "num_base_bdevs_discovered": 2, 00:12:44.749 "num_base_bdevs_operational": 2, 00:12:44.749 "process": { 00:12:44.749 "type": "rebuild", 00:12:44.749 "target": "spare", 00:12:44.749 "progress": { 00:12:44.749 
"blocks": 20480, 00:12:44.749 "percent": 31 00:12:44.749 } 00:12:44.749 }, 00:12:44.749 "base_bdevs_list": [ 00:12:44.749 { 00:12:44.749 "name": "spare", 00:12:44.749 "uuid": "c4617300-c76a-5d35-b2b9-92eef5277a9e", 00:12:44.749 "is_configured": true, 00:12:44.749 "data_offset": 0, 00:12:44.749 "data_size": 65536 00:12:44.749 }, 00:12:44.749 { 00:12:44.749 "name": "BaseBdev2", 00:12:44.749 "uuid": "ea280d0e-8a7e-575a-947d-42aca82ccab3", 00:12:44.749 "is_configured": true, 00:12:44.749 "data_offset": 0, 00:12:44.749 "data_size": 65536 00:12:44.749 } 00:12:44.749 ] 00:12:44.749 }' 00:12:44.749 16:09:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:44.749 16:09:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:44.749 16:09:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:44.749 16:09:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:44.749 16:09:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:44.749 16:09:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.749 16:09:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.749 [2024-12-12 16:09:10.994656] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:44.749 [2024-12-12 16:09:11.062581] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:44.749 [2024-12-12 16:09:11.062722] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:44.749 [2024-12-12 16:09:11.062742] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:44.749 [2024-12-12 16:09:11.062761] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:45.009 16:09:11 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.009 16:09:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:45.010 16:09:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:45.010 16:09:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:45.010 16:09:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:45.010 16:09:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:45.010 16:09:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:45.010 16:09:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:45.010 16:09:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:45.010 16:09:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:45.010 16:09:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:45.010 16:09:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.010 16:09:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:45.010 16:09:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.010 16:09:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.010 16:09:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.010 16:09:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:45.010 "name": "raid_bdev1", 00:12:45.010 "uuid": "2524565c-b542-4fc7-a578-5638656f75b9", 00:12:45.010 "strip_size_kb": 0, 00:12:45.010 "state": "online", 00:12:45.010 "raid_level": "raid1", 00:12:45.010 
"superblock": false, 00:12:45.010 "num_base_bdevs": 2, 00:12:45.010 "num_base_bdevs_discovered": 1, 00:12:45.010 "num_base_bdevs_operational": 1, 00:12:45.010 "base_bdevs_list": [ 00:12:45.010 { 00:12:45.010 "name": null, 00:12:45.010 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.010 "is_configured": false, 00:12:45.010 "data_offset": 0, 00:12:45.010 "data_size": 65536 00:12:45.010 }, 00:12:45.010 { 00:12:45.010 "name": "BaseBdev2", 00:12:45.010 "uuid": "ea280d0e-8a7e-575a-947d-42aca82ccab3", 00:12:45.010 "is_configured": true, 00:12:45.010 "data_offset": 0, 00:12:45.010 "data_size": 65536 00:12:45.010 } 00:12:45.010 ] 00:12:45.010 }' 00:12:45.010 16:09:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:45.010 16:09:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.269 16:09:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:45.270 16:09:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:45.270 16:09:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:45.270 16:09:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:45.270 16:09:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:45.270 16:09:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:45.270 16:09:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.270 16:09:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.270 16:09:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.270 16:09:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.270 16:09:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:12:45.270 "name": "raid_bdev1", 00:12:45.270 "uuid": "2524565c-b542-4fc7-a578-5638656f75b9", 00:12:45.270 "strip_size_kb": 0, 00:12:45.270 "state": "online", 00:12:45.270 "raid_level": "raid1", 00:12:45.270 "superblock": false, 00:12:45.270 "num_base_bdevs": 2, 00:12:45.270 "num_base_bdevs_discovered": 1, 00:12:45.270 "num_base_bdevs_operational": 1, 00:12:45.270 "base_bdevs_list": [ 00:12:45.270 { 00:12:45.270 "name": null, 00:12:45.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.270 "is_configured": false, 00:12:45.270 "data_offset": 0, 00:12:45.270 "data_size": 65536 00:12:45.270 }, 00:12:45.270 { 00:12:45.270 "name": "BaseBdev2", 00:12:45.270 "uuid": "ea280d0e-8a7e-575a-947d-42aca82ccab3", 00:12:45.270 "is_configured": true, 00:12:45.270 "data_offset": 0, 00:12:45.270 "data_size": 65536 00:12:45.270 } 00:12:45.270 ] 00:12:45.270 }' 00:12:45.270 16:09:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:45.270 16:09:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:45.270 16:09:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:45.530 16:09:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:45.530 16:09:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:45.530 16:09:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.530 16:09:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.530 [2024-12-12 16:09:11.630178] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:45.530 [2024-12-12 16:09:11.649190] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:12:45.530 16:09:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.530 
16:09:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:45.530 [2024-12-12 16:09:11.651511] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:46.468 16:09:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:46.468 16:09:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:46.468 16:09:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:46.468 16:09:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:46.468 16:09:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:46.468 16:09:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.468 16:09:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:46.468 16:09:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.468 16:09:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.468 16:09:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.468 16:09:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:46.468 "name": "raid_bdev1", 00:12:46.468 "uuid": "2524565c-b542-4fc7-a578-5638656f75b9", 00:12:46.468 "strip_size_kb": 0, 00:12:46.468 "state": "online", 00:12:46.468 "raid_level": "raid1", 00:12:46.468 "superblock": false, 00:12:46.468 "num_base_bdevs": 2, 00:12:46.468 "num_base_bdevs_discovered": 2, 00:12:46.468 "num_base_bdevs_operational": 2, 00:12:46.468 "process": { 00:12:46.469 "type": "rebuild", 00:12:46.469 "target": "spare", 00:12:46.469 "progress": { 00:12:46.469 "blocks": 20480, 00:12:46.469 "percent": 31 00:12:46.469 } 00:12:46.469 }, 00:12:46.469 "base_bdevs_list": [ 
00:12:46.469 { 00:12:46.469 "name": "spare", 00:12:46.469 "uuid": "c4617300-c76a-5d35-b2b9-92eef5277a9e", 00:12:46.469 "is_configured": true, 00:12:46.469 "data_offset": 0, 00:12:46.469 "data_size": 65536 00:12:46.469 }, 00:12:46.469 { 00:12:46.469 "name": "BaseBdev2", 00:12:46.469 "uuid": "ea280d0e-8a7e-575a-947d-42aca82ccab3", 00:12:46.469 "is_configured": true, 00:12:46.469 "data_offset": 0, 00:12:46.469 "data_size": 65536 00:12:46.469 } 00:12:46.469 ] 00:12:46.469 }' 00:12:46.469 16:09:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:46.469 16:09:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:46.469 16:09:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:46.469 16:09:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:46.469 16:09:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:12:46.469 16:09:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:46.469 16:09:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:46.469 16:09:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:46.469 16:09:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=380 00:12:46.469 16:09:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:46.469 16:09:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:46.469 16:09:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:46.469 16:09:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:46.469 16:09:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:46.469 
16:09:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:46.469 16:09:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:46.469 16:09:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.469 16:09:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.469 16:09:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.469 16:09:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.728 16:09:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:46.728 "name": "raid_bdev1", 00:12:46.728 "uuid": "2524565c-b542-4fc7-a578-5638656f75b9", 00:12:46.728 "strip_size_kb": 0, 00:12:46.728 "state": "online", 00:12:46.728 "raid_level": "raid1", 00:12:46.728 "superblock": false, 00:12:46.728 "num_base_bdevs": 2, 00:12:46.728 "num_base_bdevs_discovered": 2, 00:12:46.728 "num_base_bdevs_operational": 2, 00:12:46.728 "process": { 00:12:46.728 "type": "rebuild", 00:12:46.728 "target": "spare", 00:12:46.728 "progress": { 00:12:46.728 "blocks": 22528, 00:12:46.728 "percent": 34 00:12:46.728 } 00:12:46.728 }, 00:12:46.728 "base_bdevs_list": [ 00:12:46.728 { 00:12:46.728 "name": "spare", 00:12:46.728 "uuid": "c4617300-c76a-5d35-b2b9-92eef5277a9e", 00:12:46.728 "is_configured": true, 00:12:46.728 "data_offset": 0, 00:12:46.728 "data_size": 65536 00:12:46.728 }, 00:12:46.728 { 00:12:46.728 "name": "BaseBdev2", 00:12:46.728 "uuid": "ea280d0e-8a7e-575a-947d-42aca82ccab3", 00:12:46.728 "is_configured": true, 00:12:46.728 "data_offset": 0, 00:12:46.728 "data_size": 65536 00:12:46.729 } 00:12:46.729 ] 00:12:46.729 }' 00:12:46.729 16:09:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:46.729 16:09:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:12:46.729 16:09:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:46.729 16:09:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:46.729 16:09:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:47.667 16:09:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:47.667 16:09:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:47.667 16:09:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:47.667 16:09:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:47.667 16:09:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:47.667 16:09:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:47.667 16:09:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.667 16:09:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:47.667 16:09:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.667 16:09:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.667 16:09:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.667 16:09:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:47.667 "name": "raid_bdev1", 00:12:47.667 "uuid": "2524565c-b542-4fc7-a578-5638656f75b9", 00:12:47.667 "strip_size_kb": 0, 00:12:47.667 "state": "online", 00:12:47.667 "raid_level": "raid1", 00:12:47.667 "superblock": false, 00:12:47.667 "num_base_bdevs": 2, 00:12:47.667 "num_base_bdevs_discovered": 2, 00:12:47.667 "num_base_bdevs_operational": 2, 00:12:47.667 "process": { 
00:12:47.667 "type": "rebuild", 00:12:47.667 "target": "spare", 00:12:47.667 "progress": { 00:12:47.667 "blocks": 45056, 00:12:47.667 "percent": 68 00:12:47.667 } 00:12:47.667 }, 00:12:47.667 "base_bdevs_list": [ 00:12:47.667 { 00:12:47.667 "name": "spare", 00:12:47.667 "uuid": "c4617300-c76a-5d35-b2b9-92eef5277a9e", 00:12:47.667 "is_configured": true, 00:12:47.667 "data_offset": 0, 00:12:47.667 "data_size": 65536 00:12:47.667 }, 00:12:47.667 { 00:12:47.667 "name": "BaseBdev2", 00:12:47.667 "uuid": "ea280d0e-8a7e-575a-947d-42aca82ccab3", 00:12:47.667 "is_configured": true, 00:12:47.667 "data_offset": 0, 00:12:47.667 "data_size": 65536 00:12:47.667 } 00:12:47.667 ] 00:12:47.667 }' 00:12:47.667 16:09:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:47.927 16:09:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:47.927 16:09:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:47.927 16:09:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:47.927 16:09:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:48.866 [2024-12-12 16:09:14.879645] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:48.866 [2024-12-12 16:09:14.879902] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:48.866 [2024-12-12 16:09:14.879977] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:48.866 16:09:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:48.866 16:09:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:48.866 16:09:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:48.866 16:09:15 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:48.866 16:09:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:48.866 16:09:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:48.866 16:09:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.866 16:09:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:48.866 16:09:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.866 16:09:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.866 16:09:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.866 16:09:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:48.866 "name": "raid_bdev1", 00:12:48.866 "uuid": "2524565c-b542-4fc7-a578-5638656f75b9", 00:12:48.866 "strip_size_kb": 0, 00:12:48.866 "state": "online", 00:12:48.866 "raid_level": "raid1", 00:12:48.866 "superblock": false, 00:12:48.866 "num_base_bdevs": 2, 00:12:48.866 "num_base_bdevs_discovered": 2, 00:12:48.866 "num_base_bdevs_operational": 2, 00:12:48.866 "base_bdevs_list": [ 00:12:48.866 { 00:12:48.866 "name": "spare", 00:12:48.866 "uuid": "c4617300-c76a-5d35-b2b9-92eef5277a9e", 00:12:48.866 "is_configured": true, 00:12:48.866 "data_offset": 0, 00:12:48.866 "data_size": 65536 00:12:48.866 }, 00:12:48.866 { 00:12:48.866 "name": "BaseBdev2", 00:12:48.866 "uuid": "ea280d0e-8a7e-575a-947d-42aca82ccab3", 00:12:48.866 "is_configured": true, 00:12:48.866 "data_offset": 0, 00:12:48.866 "data_size": 65536 00:12:48.866 } 00:12:48.866 ] 00:12:48.866 }' 00:12:48.866 16:09:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:48.866 16:09:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:48.866 16:09:15 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:48.866 16:09:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:48.867 16:09:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:12:48.867 16:09:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:48.867 16:09:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:48.867 16:09:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:48.867 16:09:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:48.867 16:09:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:48.867 16:09:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.867 16:09:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:48.867 16:09:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.867 16:09:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.129 16:09:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.129 16:09:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:49.129 "name": "raid_bdev1", 00:12:49.129 "uuid": "2524565c-b542-4fc7-a578-5638656f75b9", 00:12:49.129 "strip_size_kb": 0, 00:12:49.129 "state": "online", 00:12:49.129 "raid_level": "raid1", 00:12:49.129 "superblock": false, 00:12:49.129 "num_base_bdevs": 2, 00:12:49.129 "num_base_bdevs_discovered": 2, 00:12:49.129 "num_base_bdevs_operational": 2, 00:12:49.129 "base_bdevs_list": [ 00:12:49.129 { 00:12:49.129 "name": "spare", 00:12:49.129 "uuid": "c4617300-c76a-5d35-b2b9-92eef5277a9e", 00:12:49.129 "is_configured": true, 
00:12:49.129 "data_offset": 0, 00:12:49.129 "data_size": 65536 00:12:49.129 }, 00:12:49.129 { 00:12:49.129 "name": "BaseBdev2", 00:12:49.129 "uuid": "ea280d0e-8a7e-575a-947d-42aca82ccab3", 00:12:49.129 "is_configured": true, 00:12:49.129 "data_offset": 0, 00:12:49.129 "data_size": 65536 00:12:49.129 } 00:12:49.129 ] 00:12:49.129 }' 00:12:49.129 16:09:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:49.129 16:09:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:49.129 16:09:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:49.129 16:09:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:49.129 16:09:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:49.129 16:09:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:49.130 16:09:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:49.130 16:09:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:49.130 16:09:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:49.130 16:09:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:49.130 16:09:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:49.130 16:09:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:49.130 16:09:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:49.130 16:09:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:49.130 16:09:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:49.130 16:09:15 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.130 16:09:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.130 16:09:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.130 16:09:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.130 16:09:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:49.130 "name": "raid_bdev1", 00:12:49.130 "uuid": "2524565c-b542-4fc7-a578-5638656f75b9", 00:12:49.130 "strip_size_kb": 0, 00:12:49.130 "state": "online", 00:12:49.130 "raid_level": "raid1", 00:12:49.130 "superblock": false, 00:12:49.130 "num_base_bdevs": 2, 00:12:49.130 "num_base_bdevs_discovered": 2, 00:12:49.130 "num_base_bdevs_operational": 2, 00:12:49.130 "base_bdevs_list": [ 00:12:49.130 { 00:12:49.130 "name": "spare", 00:12:49.130 "uuid": "c4617300-c76a-5d35-b2b9-92eef5277a9e", 00:12:49.130 "is_configured": true, 00:12:49.130 "data_offset": 0, 00:12:49.130 "data_size": 65536 00:12:49.130 }, 00:12:49.130 { 00:12:49.130 "name": "BaseBdev2", 00:12:49.130 "uuid": "ea280d0e-8a7e-575a-947d-42aca82ccab3", 00:12:49.130 "is_configured": true, 00:12:49.130 "data_offset": 0, 00:12:49.130 "data_size": 65536 00:12:49.130 } 00:12:49.130 ] 00:12:49.130 }' 00:12:49.130 16:09:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:49.130 16:09:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.697 16:09:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:49.697 16:09:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.697 16:09:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.697 [2024-12-12 16:09:15.811798] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:49.697 [2024-12-12 
16:09:15.811861] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:49.697 [2024-12-12 16:09:15.812001] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:49.697 [2024-12-12 16:09:15.812095] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:49.697 [2024-12-12 16:09:15.812107] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:49.697 16:09:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.697 16:09:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.697 16:09:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:12:49.697 16:09:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.697 16:09:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.697 16:09:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.697 16:09:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:49.697 16:09:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:49.697 16:09:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:12:49.697 16:09:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:12:49.697 16:09:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:49.697 16:09:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:12:49.697 16:09:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:49.697 16:09:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:49.697 16:09:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:49.697 16:09:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:12:49.697 16:09:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:49.697 16:09:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:49.697 16:09:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:12:49.956 /dev/nbd0 00:12:49.956 16:09:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:49.956 16:09:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:49.956 16:09:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:49.956 16:09:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:12:49.956 16:09:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:49.956 16:09:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:49.956 16:09:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:49.956 16:09:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:12:49.956 16:09:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:49.956 16:09:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:49.956 16:09:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:49.956 1+0 records in 00:12:49.956 1+0 records out 00:12:49.956 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000370612 s, 11.1 MB/s 00:12:49.956 16:09:16 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:49.956 16:09:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:12:49.956 16:09:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:49.956 16:09:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:49.956 16:09:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:12:49.956 16:09:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:49.956 16:09:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:49.956 16:09:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:12:50.215 /dev/nbd1 00:12:50.215 16:09:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:50.215 16:09:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:50.215 16:09:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:50.215 16:09:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:12:50.215 16:09:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:50.215 16:09:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:50.215 16:09:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:50.215 16:09:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:12:50.215 16:09:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:50.215 16:09:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:50.215 16:09:16 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:50.215 1+0 records in 00:12:50.215 1+0 records out 00:12:50.215 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000416341 s, 9.8 MB/s 00:12:50.215 16:09:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:50.215 16:09:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:12:50.215 16:09:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:50.215 16:09:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:50.215 16:09:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:12:50.215 16:09:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:50.215 16:09:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:50.215 16:09:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:12:50.473 16:09:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:12:50.473 16:09:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:50.473 16:09:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:50.473 16:09:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:50.473 16:09:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:12:50.473 16:09:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:50.473 16:09:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:50.473 16:09:16 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:50.473 16:09:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:50.473 16:09:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:50.473 16:09:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:50.473 16:09:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:50.473 16:09:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:50.474 16:09:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:50.474 16:09:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:50.474 16:09:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:50.474 16:09:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:50.733 16:09:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:50.733 16:09:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:50.733 16:09:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:50.733 16:09:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:50.733 16:09:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:50.733 16:09:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:50.733 16:09:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:50.733 16:09:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:50.733 16:09:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:12:50.733 16:09:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 
77363 00:12:50.733 16:09:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 77363 ']' 00:12:50.733 16:09:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 77363 00:12:50.733 16:09:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:12:50.733 16:09:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:50.733 16:09:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77363 00:12:50.733 16:09:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:50.733 16:09:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:50.733 killing process with pid 77363 00:12:50.733 16:09:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77363' 00:12:50.733 16:09:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 77363 00:12:50.733 Received shutdown signal, test time was about 60.000000 seconds 00:12:50.733 00:12:50.733 Latency(us) 00:12:50.733 [2024-12-12T16:09:17.085Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:50.733 [2024-12-12T16:09:17.085Z] =================================================================================================================== 00:12:50.733 [2024-12-12T16:09:17.085Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:50.733 [2024-12-12 16:09:17.033446] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:50.733 16:09:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 77363 00:12:50.992 [2024-12-12 16:09:17.321967] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:52.372 ************************************ 00:12:52.372 END TEST raid_rebuild_test 00:12:52.372 ************************************ 00:12:52.372 16:09:18 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:12:52.372 00:12:52.372 real 0m15.700s 00:12:52.372 user 0m17.480s 00:12:52.372 sys 0m3.234s 00:12:52.372 16:09:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:52.372 16:09:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.372 16:09:18 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:12:52.372 16:09:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:52.372 16:09:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:52.372 16:09:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:52.372 ************************************ 00:12:52.372 START TEST raid_rebuild_test_sb 00:12:52.372 ************************************ 00:12:52.372 16:09:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:12:52.372 16:09:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:52.372 16:09:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:12:52.372 16:09:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:12:52.372 16:09:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:12:52.372 16:09:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:52.372 16:09:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:52.372 16:09:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:52.372 16:09:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:52.372 16:09:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:52.372 16:09:18 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:52.372 16:09:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:52.372 16:09:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:52.372 16:09:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:52.372 16:09:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:52.372 16:09:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:52.372 16:09:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:52.372 16:09:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:52.372 16:09:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:52.372 16:09:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:52.372 16:09:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:52.372 16:09:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:52.372 16:09:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:52.372 16:09:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:12:52.373 16:09:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:12:52.373 16:09:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=77793 00:12:52.373 16:09:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 77793 00:12:52.373 16:09:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:52.373 16:09:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' 
-z 77793 ']' 00:12:52.373 16:09:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:52.373 16:09:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:52.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:52.373 16:09:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:52.373 16:09:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:52.373 16:09:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.373 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:52.373 Zero copy mechanism will not be used. 00:12:52.373 [2024-12-12 16:09:18.556830] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:12:52.373 [2024-12-12 16:09:18.556956] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77793 ] 00:12:52.632 [2024-12-12 16:09:18.732994] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:52.632 [2024-12-12 16:09:18.853517] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:52.891 [2024-12-12 16:09:19.081109] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:52.891 [2024-12-12 16:09:19.081183] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:53.197 16:09:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:53.197 16:09:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:12:53.197 16:09:19 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:53.197 16:09:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:53.197 16:09:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.197 16:09:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.197 BaseBdev1_malloc 00:12:53.197 16:09:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.197 16:09:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:53.197 16:09:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.197 16:09:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.197 [2024-12-12 16:09:19.459561] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:53.197 [2024-12-12 16:09:19.459659] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:53.197 [2024-12-12 16:09:19.459689] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:53.197 [2024-12-12 16:09:19.459702] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:53.197 [2024-12-12 16:09:19.462245] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:53.197 [2024-12-12 16:09:19.462292] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:53.197 BaseBdev1 00:12:53.197 16:09:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.197 16:09:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:53.197 16:09:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 
00:12:53.197 16:09:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.197 16:09:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.197 BaseBdev2_malloc 00:12:53.197 16:09:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.197 16:09:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:53.197 16:09:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.197 16:09:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.197 [2024-12-12 16:09:19.521023] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:53.197 [2024-12-12 16:09:19.521088] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:53.197 [2024-12-12 16:09:19.521112] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:53.197 [2024-12-12 16:09:19.521126] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:53.197 [2024-12-12 16:09:19.523626] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:53.197 [2024-12-12 16:09:19.523668] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:53.197 BaseBdev2 00:12:53.197 16:09:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.197 16:09:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:53.471 16:09:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.471 16:09:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.471 spare_malloc 00:12:53.471 16:09:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:12:53.471 16:09:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:53.471 16:09:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.471 16:09:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.471 spare_delay 00:12:53.471 16:09:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.471 16:09:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:53.471 16:09:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.471 16:09:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.471 [2024-12-12 16:09:19.605948] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:53.471 [2024-12-12 16:09:19.606011] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:53.471 [2024-12-12 16:09:19.606036] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:53.471 [2024-12-12 16:09:19.606049] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:53.471 [2024-12-12 16:09:19.608520] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:53.471 [2024-12-12 16:09:19.608565] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:53.471 spare 00:12:53.471 16:09:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.471 16:09:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:53.471 16:09:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.471 
16:09:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.471 [2024-12-12 16:09:19.617993] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:53.471 [2024-12-12 16:09:19.620100] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:53.471 [2024-12-12 16:09:19.620305] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:53.471 [2024-12-12 16:09:19.620333] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:53.471 [2024-12-12 16:09:19.620627] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:53.471 [2024-12-12 16:09:19.620852] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:53.472 [2024-12-12 16:09:19.620871] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:53.472 [2024-12-12 16:09:19.621056] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:53.472 16:09:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.472 16:09:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:53.472 16:09:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:53.472 16:09:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:53.472 16:09:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:53.472 16:09:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:53.472 16:09:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:53.472 16:09:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:12:53.472 16:09:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:53.472 16:09:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:53.472 16:09:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:53.472 16:09:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.472 16:09:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:53.472 16:09:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.472 16:09:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.472 16:09:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.472 16:09:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:53.472 "name": "raid_bdev1", 00:12:53.472 "uuid": "710dd61e-c78c-4a25-8ff2-b97fede60059", 00:12:53.472 "strip_size_kb": 0, 00:12:53.472 "state": "online", 00:12:53.472 "raid_level": "raid1", 00:12:53.472 "superblock": true, 00:12:53.472 "num_base_bdevs": 2, 00:12:53.472 "num_base_bdevs_discovered": 2, 00:12:53.472 "num_base_bdevs_operational": 2, 00:12:53.472 "base_bdevs_list": [ 00:12:53.472 { 00:12:53.472 "name": "BaseBdev1", 00:12:53.472 "uuid": "feba73d7-10bf-5a49-b0aa-ae127b8da109", 00:12:53.472 "is_configured": true, 00:12:53.472 "data_offset": 2048, 00:12:53.472 "data_size": 63488 00:12:53.472 }, 00:12:53.472 { 00:12:53.472 "name": "BaseBdev2", 00:12:53.472 "uuid": "0faa6841-1c85-5854-82fe-46eb5d537b69", 00:12:53.472 "is_configured": true, 00:12:53.472 "data_offset": 2048, 00:12:53.472 "data_size": 63488 00:12:53.472 } 00:12:53.472 ] 00:12:53.472 }' 00:12:53.472 16:09:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:53.472 16:09:19 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.040 16:09:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:54.040 16:09:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:54.040 16:09:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.040 16:09:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.040 [2024-12-12 16:09:20.145786] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:54.040 16:09:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.040 16:09:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:12:54.040 16:09:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.040 16:09:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.040 16:09:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:54.040 16:09:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.040 16:09:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.040 16:09:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:12:54.040 16:09:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:12:54.040 16:09:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:12:54.040 16:09:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:12:54.040 16:09:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:12:54.040 16:09:20 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:54.040 16:09:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:12:54.040 16:09:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:54.040 16:09:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:54.040 16:09:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:54.040 16:09:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:12:54.040 16:09:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:54.040 16:09:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:54.040 16:09:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:12:54.299 [2024-12-12 16:09:20.452703] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:54.299 /dev/nbd0 00:12:54.299 16:09:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:54.299 16:09:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:54.299 16:09:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:54.299 16:09:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:12:54.299 16:09:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:54.299 16:09:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:54.299 16:09:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:54.299 16:09:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:12:54.299 16:09:20 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:54.299 16:09:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:54.299 16:09:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:54.299 1+0 records in 00:12:54.299 1+0 records out 00:12:54.299 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000346383 s, 11.8 MB/s 00:12:54.299 16:09:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:54.299 16:09:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:12:54.299 16:09:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:54.299 16:09:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:54.299 16:09:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:12:54.299 16:09:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:54.299 16:09:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:54.299 16:09:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:12:54.299 16:09:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:12:54.299 16:09:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:12:59.577 63488+0 records in 00:12:59.577 63488+0 records out 00:12:59.577 32505856 bytes (33 MB, 31 MiB) copied, 4.36583 s, 7.4 MB/s 00:12:59.577 16:09:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:59.577 16:09:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 
00:12:59.577 16:09:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:59.578 16:09:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:59.578 16:09:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:12:59.578 16:09:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:59.578 16:09:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:59.578 16:09:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:59.578 [2024-12-12 16:09:25.103111] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:59.578 16:09:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:59.578 16:09:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:59.578 16:09:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:59.578 16:09:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:59.578 16:09:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:59.578 16:09:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:59.578 16:09:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:59.578 16:09:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:59.578 16:09:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.578 16:09:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.578 [2024-12-12 16:09:25.118843] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:59.578 16:09:25 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.578 16:09:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:59.578 16:09:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:59.578 16:09:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:59.578 16:09:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:59.578 16:09:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:59.578 16:09:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:59.578 16:09:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:59.578 16:09:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:59.578 16:09:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:59.578 16:09:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:59.578 16:09:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.578 16:09:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.578 16:09:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.578 16:09:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:59.578 16:09:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.578 16:09:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:59.578 "name": "raid_bdev1", 00:12:59.578 "uuid": "710dd61e-c78c-4a25-8ff2-b97fede60059", 00:12:59.578 "strip_size_kb": 0, 00:12:59.578 "state": "online", 00:12:59.578 "raid_level": "raid1", 
00:12:59.578 "superblock": true, 00:12:59.578 "num_base_bdevs": 2, 00:12:59.578 "num_base_bdevs_discovered": 1, 00:12:59.578 "num_base_bdevs_operational": 1, 00:12:59.578 "base_bdevs_list": [ 00:12:59.578 { 00:12:59.578 "name": null, 00:12:59.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:59.578 "is_configured": false, 00:12:59.578 "data_offset": 0, 00:12:59.578 "data_size": 63488 00:12:59.578 }, 00:12:59.578 { 00:12:59.578 "name": "BaseBdev2", 00:12:59.578 "uuid": "0faa6841-1c85-5854-82fe-46eb5d537b69", 00:12:59.578 "is_configured": true, 00:12:59.578 "data_offset": 2048, 00:12:59.578 "data_size": 63488 00:12:59.578 } 00:12:59.578 ] 00:12:59.578 }' 00:12:59.578 16:09:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:59.578 16:09:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.578 16:09:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:59.578 16:09:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.578 16:09:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.578 [2024-12-12 16:09:25.562085] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:59.578 [2024-12-12 16:09:25.580800] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:12:59.578 16:09:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.578 [2024-12-12 16:09:25.582965] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:59.578 16:09:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:00.515 16:09:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:00.515 16:09:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:13:00.515 16:09:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:00.515 16:09:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:00.515 16:09:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:00.515 16:09:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.515 16:09:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:00.515 16:09:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.515 16:09:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.515 16:09:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.515 16:09:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:00.515 "name": "raid_bdev1", 00:13:00.515 "uuid": "710dd61e-c78c-4a25-8ff2-b97fede60059", 00:13:00.515 "strip_size_kb": 0, 00:13:00.515 "state": "online", 00:13:00.515 "raid_level": "raid1", 00:13:00.515 "superblock": true, 00:13:00.515 "num_base_bdevs": 2, 00:13:00.515 "num_base_bdevs_discovered": 2, 00:13:00.515 "num_base_bdevs_operational": 2, 00:13:00.515 "process": { 00:13:00.515 "type": "rebuild", 00:13:00.515 "target": "spare", 00:13:00.515 "progress": { 00:13:00.515 "blocks": 20480, 00:13:00.515 "percent": 32 00:13:00.515 } 00:13:00.515 }, 00:13:00.515 "base_bdevs_list": [ 00:13:00.515 { 00:13:00.515 "name": "spare", 00:13:00.515 "uuid": "7ddf9d9c-62f8-5070-84a8-9480830f14e1", 00:13:00.515 "is_configured": true, 00:13:00.515 "data_offset": 2048, 00:13:00.515 "data_size": 63488 00:13:00.515 }, 00:13:00.515 { 00:13:00.515 "name": "BaseBdev2", 00:13:00.515 "uuid": "0faa6841-1c85-5854-82fe-46eb5d537b69", 00:13:00.515 "is_configured": true, 00:13:00.515 "data_offset": 2048, 
00:13:00.515 "data_size": 63488 00:13:00.515 } 00:13:00.515 ] 00:13:00.515 }' 00:13:00.515 16:09:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:00.515 16:09:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:00.515 16:09:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:00.515 16:09:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:00.515 16:09:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:00.515 16:09:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.515 16:09:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.515 [2024-12-12 16:09:26.710111] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:00.515 [2024-12-12 16:09:26.789177] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:00.515 [2024-12-12 16:09:26.789267] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:00.515 [2024-12-12 16:09:26.789286] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:00.515 [2024-12-12 16:09:26.789297] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:00.515 16:09:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.515 16:09:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:00.515 16:09:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:00.515 16:09:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:00.515 16:09:26 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:00.515 16:09:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:00.515 16:09:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:00.515 16:09:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:00.515 16:09:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:00.515 16:09:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:00.515 16:09:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:00.515 16:09:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.515 16:09:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:00.515 16:09:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.515 16:09:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.515 16:09:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.775 16:09:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:00.775 "name": "raid_bdev1", 00:13:00.775 "uuid": "710dd61e-c78c-4a25-8ff2-b97fede60059", 00:13:00.775 "strip_size_kb": 0, 00:13:00.775 "state": "online", 00:13:00.775 "raid_level": "raid1", 00:13:00.775 "superblock": true, 00:13:00.775 "num_base_bdevs": 2, 00:13:00.775 "num_base_bdevs_discovered": 1, 00:13:00.775 "num_base_bdevs_operational": 1, 00:13:00.775 "base_bdevs_list": [ 00:13:00.775 { 00:13:00.775 "name": null, 00:13:00.775 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:00.775 "is_configured": false, 00:13:00.775 "data_offset": 0, 00:13:00.775 "data_size": 63488 00:13:00.775 }, 00:13:00.775 { 
00:13:00.775 "name": "BaseBdev2", 00:13:00.775 "uuid": "0faa6841-1c85-5854-82fe-46eb5d537b69", 00:13:00.775 "is_configured": true, 00:13:00.775 "data_offset": 2048, 00:13:00.775 "data_size": 63488 00:13:00.775 } 00:13:00.775 ] 00:13:00.775 }' 00:13:00.775 16:09:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:00.775 16:09:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.035 16:09:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:01.035 16:09:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:01.035 16:09:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:01.035 16:09:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:01.035 16:09:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:01.035 16:09:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.035 16:09:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:01.035 16:09:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.035 16:09:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.035 16:09:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.035 16:09:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:01.035 "name": "raid_bdev1", 00:13:01.035 "uuid": "710dd61e-c78c-4a25-8ff2-b97fede60059", 00:13:01.035 "strip_size_kb": 0, 00:13:01.035 "state": "online", 00:13:01.035 "raid_level": "raid1", 00:13:01.035 "superblock": true, 00:13:01.035 "num_base_bdevs": 2, 00:13:01.035 "num_base_bdevs_discovered": 1, 00:13:01.035 "num_base_bdevs_operational": 1, 
00:13:01.035 "base_bdevs_list": [ 00:13:01.035 { 00:13:01.035 "name": null, 00:13:01.035 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:01.035 "is_configured": false, 00:13:01.035 "data_offset": 0, 00:13:01.035 "data_size": 63488 00:13:01.035 }, 00:13:01.035 { 00:13:01.035 "name": "BaseBdev2", 00:13:01.035 "uuid": "0faa6841-1c85-5854-82fe-46eb5d537b69", 00:13:01.035 "is_configured": true, 00:13:01.035 "data_offset": 2048, 00:13:01.035 "data_size": 63488 00:13:01.035 } 00:13:01.035 ] 00:13:01.035 }' 00:13:01.035 16:09:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:01.294 16:09:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:01.294 16:09:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:01.294 16:09:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:01.294 16:09:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:01.294 16:09:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.294 16:09:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.294 [2024-12-12 16:09:27.472870] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:01.294 [2024-12-12 16:09:27.492251] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:13:01.294 16:09:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.294 16:09:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:01.294 [2024-12-12 16:09:27.494460] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:02.231 16:09:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild 
spare 00:13:02.231 16:09:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:02.231 16:09:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:02.231 16:09:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:02.231 16:09:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:02.231 16:09:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.231 16:09:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:02.231 16:09:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.231 16:09:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.231 16:09:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.231 16:09:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:02.231 "name": "raid_bdev1", 00:13:02.231 "uuid": "710dd61e-c78c-4a25-8ff2-b97fede60059", 00:13:02.231 "strip_size_kb": 0, 00:13:02.231 "state": "online", 00:13:02.231 "raid_level": "raid1", 00:13:02.231 "superblock": true, 00:13:02.231 "num_base_bdevs": 2, 00:13:02.231 "num_base_bdevs_discovered": 2, 00:13:02.231 "num_base_bdevs_operational": 2, 00:13:02.231 "process": { 00:13:02.231 "type": "rebuild", 00:13:02.231 "target": "spare", 00:13:02.231 "progress": { 00:13:02.231 "blocks": 20480, 00:13:02.231 "percent": 32 00:13:02.231 } 00:13:02.231 }, 00:13:02.231 "base_bdevs_list": [ 00:13:02.231 { 00:13:02.231 "name": "spare", 00:13:02.231 "uuid": "7ddf9d9c-62f8-5070-84a8-9480830f14e1", 00:13:02.231 "is_configured": true, 00:13:02.231 "data_offset": 2048, 00:13:02.231 "data_size": 63488 00:13:02.231 }, 00:13:02.231 { 00:13:02.231 "name": "BaseBdev2", 00:13:02.231 "uuid": 
"0faa6841-1c85-5854-82fe-46eb5d537b69", 00:13:02.231 "is_configured": true, 00:13:02.231 "data_offset": 2048, 00:13:02.231 "data_size": 63488 00:13:02.231 } 00:13:02.231 ] 00:13:02.231 }' 00:13:02.231 16:09:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:02.489 16:09:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:02.489 16:09:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:02.489 16:09:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:02.489 16:09:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:02.489 16:09:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:02.489 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:02.489 16:09:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:13:02.489 16:09:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:02.489 16:09:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:13:02.489 16:09:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=396 00:13:02.489 16:09:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:02.489 16:09:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:02.489 16:09:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:02.489 16:09:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:02.489 16:09:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:02.489 16:09:28 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:02.489 16:09:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.489 16:09:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:02.489 16:09:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.490 16:09:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.490 16:09:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.490 16:09:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:02.490 "name": "raid_bdev1", 00:13:02.490 "uuid": "710dd61e-c78c-4a25-8ff2-b97fede60059", 00:13:02.490 "strip_size_kb": 0, 00:13:02.490 "state": "online", 00:13:02.490 "raid_level": "raid1", 00:13:02.490 "superblock": true, 00:13:02.490 "num_base_bdevs": 2, 00:13:02.490 "num_base_bdevs_discovered": 2, 00:13:02.490 "num_base_bdevs_operational": 2, 00:13:02.490 "process": { 00:13:02.490 "type": "rebuild", 00:13:02.490 "target": "spare", 00:13:02.490 "progress": { 00:13:02.490 "blocks": 22528, 00:13:02.490 "percent": 35 00:13:02.490 } 00:13:02.490 }, 00:13:02.490 "base_bdevs_list": [ 00:13:02.490 { 00:13:02.490 "name": "spare", 00:13:02.490 "uuid": "7ddf9d9c-62f8-5070-84a8-9480830f14e1", 00:13:02.490 "is_configured": true, 00:13:02.490 "data_offset": 2048, 00:13:02.490 "data_size": 63488 00:13:02.490 }, 00:13:02.490 { 00:13:02.490 "name": "BaseBdev2", 00:13:02.490 "uuid": "0faa6841-1c85-5854-82fe-46eb5d537b69", 00:13:02.490 "is_configured": true, 00:13:02.490 "data_offset": 2048, 00:13:02.490 "data_size": 63488 00:13:02.490 } 00:13:02.490 ] 00:13:02.490 }' 00:13:02.490 16:09:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:02.490 16:09:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:13:02.490 16:09:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:02.490 16:09:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:02.490 16:09:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:03.869 16:09:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:03.869 16:09:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:03.869 16:09:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:03.869 16:09:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:03.869 16:09:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:03.869 16:09:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:03.869 16:09:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.869 16:09:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:03.869 16:09:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.869 16:09:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.869 16:09:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.869 16:09:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:03.869 "name": "raid_bdev1", 00:13:03.869 "uuid": "710dd61e-c78c-4a25-8ff2-b97fede60059", 00:13:03.869 "strip_size_kb": 0, 00:13:03.870 "state": "online", 00:13:03.870 "raid_level": "raid1", 00:13:03.870 "superblock": true, 00:13:03.870 "num_base_bdevs": 2, 00:13:03.870 "num_base_bdevs_discovered": 2, 00:13:03.870 
"num_base_bdevs_operational": 2, 00:13:03.870 "process": { 00:13:03.870 "type": "rebuild", 00:13:03.870 "target": "spare", 00:13:03.870 "progress": { 00:13:03.870 "blocks": 47104, 00:13:03.870 "percent": 74 00:13:03.870 } 00:13:03.870 }, 00:13:03.870 "base_bdevs_list": [ 00:13:03.870 { 00:13:03.870 "name": "spare", 00:13:03.870 "uuid": "7ddf9d9c-62f8-5070-84a8-9480830f14e1", 00:13:03.870 "is_configured": true, 00:13:03.870 "data_offset": 2048, 00:13:03.870 "data_size": 63488 00:13:03.870 }, 00:13:03.870 { 00:13:03.870 "name": "BaseBdev2", 00:13:03.870 "uuid": "0faa6841-1c85-5854-82fe-46eb5d537b69", 00:13:03.870 "is_configured": true, 00:13:03.870 "data_offset": 2048, 00:13:03.870 "data_size": 63488 00:13:03.870 } 00:13:03.870 ] 00:13:03.870 }' 00:13:03.870 16:09:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:03.870 16:09:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:03.870 16:09:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:03.870 16:09:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:03.870 16:09:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:04.438 [2024-12-12 16:09:30.609528] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:04.438 [2024-12-12 16:09:30.609716] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:04.438 [2024-12-12 16:09:30.609847] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:04.698 16:09:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:04.698 16:09:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:04.698 16:09:30 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:04.698 16:09:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:04.698 16:09:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:04.698 16:09:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:04.698 16:09:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.698 16:09:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.698 16:09:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:04.698 16:09:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.698 16:09:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.698 16:09:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:04.698 "name": "raid_bdev1", 00:13:04.698 "uuid": "710dd61e-c78c-4a25-8ff2-b97fede60059", 00:13:04.698 "strip_size_kb": 0, 00:13:04.698 "state": "online", 00:13:04.698 "raid_level": "raid1", 00:13:04.698 "superblock": true, 00:13:04.698 "num_base_bdevs": 2, 00:13:04.698 "num_base_bdevs_discovered": 2, 00:13:04.698 "num_base_bdevs_operational": 2, 00:13:04.698 "base_bdevs_list": [ 00:13:04.698 { 00:13:04.698 "name": "spare", 00:13:04.698 "uuid": "7ddf9d9c-62f8-5070-84a8-9480830f14e1", 00:13:04.698 "is_configured": true, 00:13:04.698 "data_offset": 2048, 00:13:04.698 "data_size": 63488 00:13:04.698 }, 00:13:04.698 { 00:13:04.698 "name": "BaseBdev2", 00:13:04.698 "uuid": "0faa6841-1c85-5854-82fe-46eb5d537b69", 00:13:04.698 "is_configured": true, 00:13:04.698 "data_offset": 2048, 00:13:04.698 "data_size": 63488 00:13:04.698 } 00:13:04.698 ] 00:13:04.698 }' 00:13:04.698 16:09:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type 
// "none"' 00:13:04.957 16:09:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:04.957 16:09:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:04.958 16:09:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:04.958 16:09:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:13:04.958 16:09:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:04.958 16:09:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:04.958 16:09:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:04.958 16:09:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:04.958 16:09:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:04.958 16:09:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.958 16:09:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:04.958 16:09:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.958 16:09:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.958 16:09:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.958 16:09:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:04.958 "name": "raid_bdev1", 00:13:04.958 "uuid": "710dd61e-c78c-4a25-8ff2-b97fede60059", 00:13:04.958 "strip_size_kb": 0, 00:13:04.958 "state": "online", 00:13:04.958 "raid_level": "raid1", 00:13:04.958 "superblock": true, 00:13:04.958 "num_base_bdevs": 2, 00:13:04.958 "num_base_bdevs_discovered": 2, 00:13:04.958 "num_base_bdevs_operational": 2, 
00:13:04.958 "base_bdevs_list": [ 00:13:04.958 { 00:13:04.958 "name": "spare", 00:13:04.958 "uuid": "7ddf9d9c-62f8-5070-84a8-9480830f14e1", 00:13:04.958 "is_configured": true, 00:13:04.958 "data_offset": 2048, 00:13:04.958 "data_size": 63488 00:13:04.958 }, 00:13:04.958 { 00:13:04.958 "name": "BaseBdev2", 00:13:04.958 "uuid": "0faa6841-1c85-5854-82fe-46eb5d537b69", 00:13:04.958 "is_configured": true, 00:13:04.958 "data_offset": 2048, 00:13:04.958 "data_size": 63488 00:13:04.958 } 00:13:04.958 ] 00:13:04.958 }' 00:13:04.958 16:09:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:04.958 16:09:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:04.958 16:09:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:04.958 16:09:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:04.958 16:09:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:04.958 16:09:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:04.958 16:09:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:04.958 16:09:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:04.958 16:09:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:04.958 16:09:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:04.958 16:09:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:04.958 16:09:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:04.958 16:09:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:04.958 16:09:31 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:04.958 16:09:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:04.958 16:09:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.958 16:09:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.958 16:09:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.958 16:09:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.958 16:09:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:04.958 "name": "raid_bdev1", 00:13:04.958 "uuid": "710dd61e-c78c-4a25-8ff2-b97fede60059", 00:13:04.958 "strip_size_kb": 0, 00:13:04.958 "state": "online", 00:13:04.958 "raid_level": "raid1", 00:13:04.958 "superblock": true, 00:13:04.958 "num_base_bdevs": 2, 00:13:04.958 "num_base_bdevs_discovered": 2, 00:13:04.958 "num_base_bdevs_operational": 2, 00:13:04.958 "base_bdevs_list": [ 00:13:04.958 { 00:13:04.958 "name": "spare", 00:13:04.958 "uuid": "7ddf9d9c-62f8-5070-84a8-9480830f14e1", 00:13:04.958 "is_configured": true, 00:13:04.958 "data_offset": 2048, 00:13:04.958 "data_size": 63488 00:13:04.958 }, 00:13:04.958 { 00:13:04.958 "name": "BaseBdev2", 00:13:04.958 "uuid": "0faa6841-1c85-5854-82fe-46eb5d537b69", 00:13:04.958 "is_configured": true, 00:13:04.958 "data_offset": 2048, 00:13:04.958 "data_size": 63488 00:13:04.958 } 00:13:04.958 ] 00:13:04.958 }' 00:13:04.958 16:09:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:04.958 16:09:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.526 16:09:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:05.526 16:09:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:05.526 16:09:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.526 [2024-12-12 16:09:31.709391] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:05.526 [2024-12-12 16:09:31.709428] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:05.526 [2024-12-12 16:09:31.709546] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:05.526 [2024-12-12 16:09:31.709625] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:05.526 [2024-12-12 16:09:31.709638] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:05.526 16:09:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.526 16:09:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.526 16:09:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.526 16:09:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.526 16:09:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:13:05.526 16:09:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.526 16:09:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:05.526 16:09:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:05.526 16:09:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:05.527 16:09:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:05.527 16:09:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:05.527 
16:09:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:13:05.527 16:09:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:05.527 16:09:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:05.527 16:09:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:05.527 16:09:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:05.527 16:09:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:05.527 16:09:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:05.527 16:09:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:05.786 /dev/nbd0 00:13:05.786 16:09:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:05.786 16:09:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:05.786 16:09:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:05.786 16:09:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:13:05.786 16:09:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:05.786 16:09:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:05.786 16:09:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:05.786 16:09:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:13:05.786 16:09:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:05.786 16:09:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:05.786 16:09:32 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:05.786 1+0 records in 00:13:05.786 1+0 records out 00:13:05.786 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000399811 s, 10.2 MB/s 00:13:05.786 16:09:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:05.786 16:09:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:13:05.786 16:09:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:05.786 16:09:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:05.786 16:09:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:13:05.786 16:09:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:05.786 16:09:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:05.786 16:09:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:06.045 /dev/nbd1 00:13:06.045 16:09:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:06.045 16:09:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:06.045 16:09:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:06.045 16:09:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:13:06.045 16:09:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:06.045 16:09:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:06.046 16:09:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- 
# grep -q -w nbd1 /proc/partitions 00:13:06.046 16:09:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:13:06.046 16:09:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:06.046 16:09:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:06.046 16:09:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:06.046 1+0 records in 00:13:06.046 1+0 records out 00:13:06.046 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000383666 s, 10.7 MB/s 00:13:06.046 16:09:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:06.046 16:09:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:13:06.046 16:09:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:06.046 16:09:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:06.046 16:09:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:13:06.046 16:09:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:06.046 16:09:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:06.046 16:09:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:06.305 16:09:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:06.305 16:09:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:06.305 16:09:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:06.305 16:09:32 bdev_raid.raid_rebuild_test_sb 
-- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:06.305 16:09:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:06.305 16:09:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:06.305 16:09:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:06.565 16:09:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:06.565 16:09:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:06.565 16:09:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:06.565 16:09:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:06.565 16:09:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:06.565 16:09:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:06.565 16:09:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:06.565 16:09:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:06.565 16:09:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:06.565 16:09:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:06.565 16:09:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:06.565 16:09:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:06.565 16:09:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:06.565 16:09:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:06.565 16:09:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 
00:13:06.565 16:09:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:06.565 16:09:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:06.565 16:09:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:06.565 16:09:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:13:06.565 16:09:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:13:06.565 16:09:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.565 16:09:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.565 16:09:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.565 16:09:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:06.565 16:09:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.565 16:09:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.565 [2024-12-12 16:09:32.901300] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:06.565 [2024-12-12 16:09:32.901353] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:06.565 [2024-12-12 16:09:32.901381] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:06.565 [2024-12-12 16:09:32.901393] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:06.565 [2024-12-12 16:09:32.903868] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:06.565 [2024-12-12 16:09:32.903915] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:06.565 [2024-12-12 16:09:32.904035] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid 
superblock found on bdev spare 00:13:06.565 [2024-12-12 16:09:32.904108] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:06.565 [2024-12-12 16:09:32.904296] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:06.565 spare 00:13:06.565 16:09:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.565 16:09:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:13:06.565 16:09:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.565 16:09:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.825 [2024-12-12 16:09:33.004222] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:13:06.825 [2024-12-12 16:09:33.004268] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:06.825 [2024-12-12 16:09:33.004602] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:13:06.825 [2024-12-12 16:09:33.004833] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:13:06.825 [2024-12-12 16:09:33.004850] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:13:06.825 [2024-12-12 16:09:33.005104] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:06.825 16:09:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.825 16:09:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:06.825 16:09:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:06.825 16:09:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:06.825 16:09:33 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:06.825 16:09:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:06.825 16:09:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:06.825 16:09:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:06.825 16:09:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:06.825 16:09:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:06.825 16:09:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:06.825 16:09:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.825 16:09:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:06.825 16:09:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.825 16:09:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.825 16:09:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.825 16:09:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:06.825 "name": "raid_bdev1", 00:13:06.825 "uuid": "710dd61e-c78c-4a25-8ff2-b97fede60059", 00:13:06.825 "strip_size_kb": 0, 00:13:06.825 "state": "online", 00:13:06.825 "raid_level": "raid1", 00:13:06.825 "superblock": true, 00:13:06.825 "num_base_bdevs": 2, 00:13:06.825 "num_base_bdevs_discovered": 2, 00:13:06.825 "num_base_bdevs_operational": 2, 00:13:06.825 "base_bdevs_list": [ 00:13:06.825 { 00:13:06.825 "name": "spare", 00:13:06.825 "uuid": "7ddf9d9c-62f8-5070-84a8-9480830f14e1", 00:13:06.825 "is_configured": true, 00:13:06.825 "data_offset": 2048, 00:13:06.825 "data_size": 63488 00:13:06.825 }, 00:13:06.825 { 
00:13:06.825 "name": "BaseBdev2", 00:13:06.825 "uuid": "0faa6841-1c85-5854-82fe-46eb5d537b69", 00:13:06.825 "is_configured": true, 00:13:06.825 "data_offset": 2048, 00:13:06.825 "data_size": 63488 00:13:06.825 } 00:13:06.825 ] 00:13:06.825 }' 00:13:06.825 16:09:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:06.825 16:09:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.394 16:09:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:07.394 16:09:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:07.394 16:09:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:07.394 16:09:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:07.394 16:09:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:07.394 16:09:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.394 16:09:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:07.394 16:09:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.394 16:09:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.394 16:09:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.394 16:09:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:07.394 "name": "raid_bdev1", 00:13:07.394 "uuid": "710dd61e-c78c-4a25-8ff2-b97fede60059", 00:13:07.394 "strip_size_kb": 0, 00:13:07.394 "state": "online", 00:13:07.394 "raid_level": "raid1", 00:13:07.394 "superblock": true, 00:13:07.394 "num_base_bdevs": 2, 00:13:07.394 "num_base_bdevs_discovered": 2, 00:13:07.394 "num_base_bdevs_operational": 2, 
00:13:07.394 "base_bdevs_list": [ 00:13:07.394 { 00:13:07.394 "name": "spare", 00:13:07.394 "uuid": "7ddf9d9c-62f8-5070-84a8-9480830f14e1", 00:13:07.394 "is_configured": true, 00:13:07.394 "data_offset": 2048, 00:13:07.394 "data_size": 63488 00:13:07.394 }, 00:13:07.394 { 00:13:07.394 "name": "BaseBdev2", 00:13:07.394 "uuid": "0faa6841-1c85-5854-82fe-46eb5d537b69", 00:13:07.394 "is_configured": true, 00:13:07.394 "data_offset": 2048, 00:13:07.394 "data_size": 63488 00:13:07.394 } 00:13:07.394 ] 00:13:07.394 }' 00:13:07.394 16:09:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:07.394 16:09:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:07.394 16:09:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:07.394 16:09:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:07.394 16:09:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.394 16:09:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:13:07.394 16:09:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.394 16:09:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.394 16:09:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.394 16:09:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:13:07.394 16:09:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:07.394 16:09:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.394 16:09:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.394 [2024-12-12 16:09:33.644105] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:07.394 16:09:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.394 16:09:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:07.394 16:09:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:07.394 16:09:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:07.394 16:09:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:07.394 16:09:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:07.394 16:09:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:07.394 16:09:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:07.394 16:09:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:07.394 16:09:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:07.394 16:09:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:07.394 16:09:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.394 16:09:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.394 16:09:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.394 16:09:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:07.394 16:09:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.394 16:09:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:07.394 "name": "raid_bdev1", 00:13:07.394 "uuid": 
"710dd61e-c78c-4a25-8ff2-b97fede60059", 00:13:07.394 "strip_size_kb": 0, 00:13:07.394 "state": "online", 00:13:07.394 "raid_level": "raid1", 00:13:07.394 "superblock": true, 00:13:07.394 "num_base_bdevs": 2, 00:13:07.394 "num_base_bdevs_discovered": 1, 00:13:07.394 "num_base_bdevs_operational": 1, 00:13:07.394 "base_bdevs_list": [ 00:13:07.394 { 00:13:07.394 "name": null, 00:13:07.394 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:07.394 "is_configured": false, 00:13:07.394 "data_offset": 0, 00:13:07.394 "data_size": 63488 00:13:07.394 }, 00:13:07.394 { 00:13:07.394 "name": "BaseBdev2", 00:13:07.394 "uuid": "0faa6841-1c85-5854-82fe-46eb5d537b69", 00:13:07.394 "is_configured": true, 00:13:07.394 "data_offset": 2048, 00:13:07.394 "data_size": 63488 00:13:07.394 } 00:13:07.394 ] 00:13:07.394 }' 00:13:07.394 16:09:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:07.394 16:09:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.963 16:09:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:07.964 16:09:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.964 16:09:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.964 [2024-12-12 16:09:34.135324] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:07.964 [2024-12-12 16:09:34.135537] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:07.964 [2024-12-12 16:09:34.135561] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:13:07.964 [2024-12-12 16:09:34.135600] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:07.964 [2024-12-12 16:09:34.152530] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:13:07.964 16:09:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.964 16:09:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:13:07.964 [2024-12-12 16:09:34.154440] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:08.901 16:09:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:08.901 16:09:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:08.901 16:09:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:08.901 16:09:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:08.901 16:09:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:08.901 16:09:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.901 16:09:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:08.901 16:09:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.901 16:09:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.901 16:09:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.901 16:09:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:08.901 "name": "raid_bdev1", 00:13:08.901 "uuid": "710dd61e-c78c-4a25-8ff2-b97fede60059", 00:13:08.901 "strip_size_kb": 0, 00:13:08.901 "state": "online", 00:13:08.901 "raid_level": "raid1", 
00:13:08.901 "superblock": true, 00:13:08.901 "num_base_bdevs": 2, 00:13:08.901 "num_base_bdevs_discovered": 2, 00:13:08.901 "num_base_bdevs_operational": 2, 00:13:08.901 "process": { 00:13:08.901 "type": "rebuild", 00:13:08.901 "target": "spare", 00:13:08.901 "progress": { 00:13:08.901 "blocks": 20480, 00:13:08.901 "percent": 32 00:13:08.901 } 00:13:08.901 }, 00:13:08.901 "base_bdevs_list": [ 00:13:08.901 { 00:13:08.901 "name": "spare", 00:13:08.901 "uuid": "7ddf9d9c-62f8-5070-84a8-9480830f14e1", 00:13:08.901 "is_configured": true, 00:13:08.901 "data_offset": 2048, 00:13:08.901 "data_size": 63488 00:13:08.901 }, 00:13:08.901 { 00:13:08.901 "name": "BaseBdev2", 00:13:08.901 "uuid": "0faa6841-1c85-5854-82fe-46eb5d537b69", 00:13:08.901 "is_configured": true, 00:13:08.901 "data_offset": 2048, 00:13:08.901 "data_size": 63488 00:13:08.901 } 00:13:08.901 ] 00:13:08.901 }' 00:13:08.901 16:09:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:09.160 16:09:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:09.160 16:09:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:09.160 16:09:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:09.160 16:09:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:13:09.160 16:09:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.160 16:09:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.160 [2024-12-12 16:09:35.293795] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:09.160 [2024-12-12 16:09:35.360105] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:09.160 [2024-12-12 16:09:35.360178] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:13:09.160 [2024-12-12 16:09:35.360195] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:09.160 [2024-12-12 16:09:35.360205] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:09.160 16:09:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.160 16:09:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:09.160 16:09:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:09.160 16:09:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:09.160 16:09:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:09.160 16:09:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:09.160 16:09:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:09.160 16:09:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:09.160 16:09:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:09.160 16:09:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:09.160 16:09:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:09.160 16:09:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.160 16:09:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.160 16:09:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:09.160 16:09:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.160 16:09:35 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.160 16:09:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:09.160 "name": "raid_bdev1", 00:13:09.160 "uuid": "710dd61e-c78c-4a25-8ff2-b97fede60059", 00:13:09.160 "strip_size_kb": 0, 00:13:09.160 "state": "online", 00:13:09.160 "raid_level": "raid1", 00:13:09.160 "superblock": true, 00:13:09.160 "num_base_bdevs": 2, 00:13:09.160 "num_base_bdevs_discovered": 1, 00:13:09.160 "num_base_bdevs_operational": 1, 00:13:09.160 "base_bdevs_list": [ 00:13:09.160 { 00:13:09.160 "name": null, 00:13:09.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:09.160 "is_configured": false, 00:13:09.160 "data_offset": 0, 00:13:09.160 "data_size": 63488 00:13:09.160 }, 00:13:09.160 { 00:13:09.160 "name": "BaseBdev2", 00:13:09.160 "uuid": "0faa6841-1c85-5854-82fe-46eb5d537b69", 00:13:09.160 "is_configured": true, 00:13:09.160 "data_offset": 2048, 00:13:09.160 "data_size": 63488 00:13:09.160 } 00:13:09.160 ] 00:13:09.160 }' 00:13:09.160 16:09:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:09.160 16:09:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.741 16:09:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:09.741 16:09:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.741 16:09:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.741 [2024-12-12 16:09:35.845940] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:09.741 [2024-12-12 16:09:35.846008] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:09.741 [2024-12-12 16:09:35.846034] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:13:09.741 [2024-12-12 16:09:35.846058] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:09.741 [2024-12-12 16:09:35.846625] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:09.741 [2024-12-12 16:09:35.846669] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:09.741 [2024-12-12 16:09:35.846775] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:09.741 [2024-12-12 16:09:35.846791] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:09.741 [2024-12-12 16:09:35.846802] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:13:09.741 [2024-12-12 16:09:35.846836] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:09.741 [2024-12-12 16:09:35.864981] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:13:09.741 spare 00:13:09.741 16:09:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.741 16:09:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:13:09.741 [2024-12-12 16:09:35.867046] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:10.703 16:09:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:10.703 16:09:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:10.703 16:09:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:10.703 16:09:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:10.703 16:09:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:10.703 16:09:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:13:10.703 16:09:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:10.703 16:09:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.703 16:09:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.703 16:09:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.703 16:09:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:10.703 "name": "raid_bdev1", 00:13:10.703 "uuid": "710dd61e-c78c-4a25-8ff2-b97fede60059", 00:13:10.703 "strip_size_kb": 0, 00:13:10.703 "state": "online", 00:13:10.703 "raid_level": "raid1", 00:13:10.703 "superblock": true, 00:13:10.703 "num_base_bdevs": 2, 00:13:10.703 "num_base_bdevs_discovered": 2, 00:13:10.703 "num_base_bdevs_operational": 2, 00:13:10.703 "process": { 00:13:10.703 "type": "rebuild", 00:13:10.703 "target": "spare", 00:13:10.703 "progress": { 00:13:10.703 "blocks": 20480, 00:13:10.703 "percent": 32 00:13:10.703 } 00:13:10.703 }, 00:13:10.703 "base_bdevs_list": [ 00:13:10.704 { 00:13:10.704 "name": "spare", 00:13:10.704 "uuid": "7ddf9d9c-62f8-5070-84a8-9480830f14e1", 00:13:10.704 "is_configured": true, 00:13:10.704 "data_offset": 2048, 00:13:10.704 "data_size": 63488 00:13:10.704 }, 00:13:10.704 { 00:13:10.704 "name": "BaseBdev2", 00:13:10.704 "uuid": "0faa6841-1c85-5854-82fe-46eb5d537b69", 00:13:10.704 "is_configured": true, 00:13:10.704 "data_offset": 2048, 00:13:10.704 "data_size": 63488 00:13:10.704 } 00:13:10.704 ] 00:13:10.704 }' 00:13:10.704 16:09:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:10.704 16:09:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:10.704 16:09:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:10.704 
16:09:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:10.704 16:09:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:13:10.704 16:09:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.704 16:09:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.704 [2024-12-12 16:09:37.010685] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:10.963 [2024-12-12 16:09:37.073002] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:10.963 [2024-12-12 16:09:37.073147] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:10.963 [2024-12-12 16:09:37.073169] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:10.963 [2024-12-12 16:09:37.073177] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:10.963 16:09:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.963 16:09:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:10.963 16:09:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:10.963 16:09:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:10.963 16:09:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:10.963 16:09:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:10.963 16:09:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:10.963 16:09:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:10.963 16:09:37 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:10.963 16:09:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:10.963 16:09:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:10.963 16:09:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.963 16:09:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:10.963 16:09:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.963 16:09:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.963 16:09:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.963 16:09:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:10.963 "name": "raid_bdev1", 00:13:10.963 "uuid": "710dd61e-c78c-4a25-8ff2-b97fede60059", 00:13:10.963 "strip_size_kb": 0, 00:13:10.963 "state": "online", 00:13:10.963 "raid_level": "raid1", 00:13:10.963 "superblock": true, 00:13:10.963 "num_base_bdevs": 2, 00:13:10.963 "num_base_bdevs_discovered": 1, 00:13:10.963 "num_base_bdevs_operational": 1, 00:13:10.963 "base_bdevs_list": [ 00:13:10.963 { 00:13:10.963 "name": null, 00:13:10.963 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.963 "is_configured": false, 00:13:10.963 "data_offset": 0, 00:13:10.963 "data_size": 63488 00:13:10.963 }, 00:13:10.963 { 00:13:10.963 "name": "BaseBdev2", 00:13:10.963 "uuid": "0faa6841-1c85-5854-82fe-46eb5d537b69", 00:13:10.963 "is_configured": true, 00:13:10.963 "data_offset": 2048, 00:13:10.963 "data_size": 63488 00:13:10.963 } 00:13:10.963 ] 00:13:10.963 }' 00:13:10.963 16:09:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:10.963 16:09:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.224 16:09:37 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:11.224 16:09:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:11.224 16:09:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:11.224 16:09:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:11.224 16:09:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:11.224 16:09:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.224 16:09:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.224 16:09:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.224 16:09:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:11.224 16:09:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.224 16:09:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:11.224 "name": "raid_bdev1", 00:13:11.224 "uuid": "710dd61e-c78c-4a25-8ff2-b97fede60059", 00:13:11.224 "strip_size_kb": 0, 00:13:11.224 "state": "online", 00:13:11.224 "raid_level": "raid1", 00:13:11.224 "superblock": true, 00:13:11.224 "num_base_bdevs": 2, 00:13:11.224 "num_base_bdevs_discovered": 1, 00:13:11.224 "num_base_bdevs_operational": 1, 00:13:11.224 "base_bdevs_list": [ 00:13:11.224 { 00:13:11.224 "name": null, 00:13:11.224 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:11.224 "is_configured": false, 00:13:11.224 "data_offset": 0, 00:13:11.224 "data_size": 63488 00:13:11.224 }, 00:13:11.224 { 00:13:11.224 "name": "BaseBdev2", 00:13:11.224 "uuid": "0faa6841-1c85-5854-82fe-46eb5d537b69", 00:13:11.224 "is_configured": true, 00:13:11.224 "data_offset": 2048, 00:13:11.224 "data_size": 
63488 00:13:11.224 } 00:13:11.224 ] 00:13:11.224 }' 00:13:11.224 16:09:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:11.484 16:09:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:11.484 16:09:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:11.484 16:09:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:11.484 16:09:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:13:11.484 16:09:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.484 16:09:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.484 16:09:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.484 16:09:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:11.484 16:09:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.484 16:09:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.484 [2024-12-12 16:09:37.689758] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:11.484 [2024-12-12 16:09:37.689826] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:11.484 [2024-12-12 16:09:37.689850] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:13:11.484 [2024-12-12 16:09:37.689870] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:11.484 [2024-12-12 16:09:37.690389] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:11.484 [2024-12-12 16:09:37.690408] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: BaseBdev1 00:13:11.484 [2024-12-12 16:09:37.690502] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:13:11.484 [2024-12-12 16:09:37.690517] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:11.484 [2024-12-12 16:09:37.690531] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:11.484 [2024-12-12 16:09:37.690541] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:13:11.484 BaseBdev1 00:13:11.484 16:09:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.484 16:09:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:13:12.418 16:09:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:12.418 16:09:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:12.418 16:09:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:12.418 16:09:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:12.418 16:09:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:12.419 16:09:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:12.419 16:09:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:12.419 16:09:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:12.419 16:09:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:12.419 16:09:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:12.419 16:09:38 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.419 16:09:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:12.419 16:09:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.419 16:09:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:12.419 16:09:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.419 16:09:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:12.419 "name": "raid_bdev1", 00:13:12.419 "uuid": "710dd61e-c78c-4a25-8ff2-b97fede60059", 00:13:12.419 "strip_size_kb": 0, 00:13:12.419 "state": "online", 00:13:12.419 "raid_level": "raid1", 00:13:12.419 "superblock": true, 00:13:12.419 "num_base_bdevs": 2, 00:13:12.419 "num_base_bdevs_discovered": 1, 00:13:12.419 "num_base_bdevs_operational": 1, 00:13:12.419 "base_bdevs_list": [ 00:13:12.419 { 00:13:12.419 "name": null, 00:13:12.419 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.419 "is_configured": false, 00:13:12.419 "data_offset": 0, 00:13:12.419 "data_size": 63488 00:13:12.419 }, 00:13:12.419 { 00:13:12.419 "name": "BaseBdev2", 00:13:12.419 "uuid": "0faa6841-1c85-5854-82fe-46eb5d537b69", 00:13:12.419 "is_configured": true, 00:13:12.419 "data_offset": 2048, 00:13:12.419 "data_size": 63488 00:13:12.419 } 00:13:12.419 ] 00:13:12.419 }' 00:13:12.419 16:09:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:12.419 16:09:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:12.987 16:09:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:12.987 16:09:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:12.987 16:09:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:13:12.987 16:09:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:12.987 16:09:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:12.987 16:09:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.987 16:09:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:12.987 16:09:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.987 16:09:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:12.987 16:09:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.987 16:09:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:12.987 "name": "raid_bdev1", 00:13:12.987 "uuid": "710dd61e-c78c-4a25-8ff2-b97fede60059", 00:13:12.987 "strip_size_kb": 0, 00:13:12.987 "state": "online", 00:13:12.987 "raid_level": "raid1", 00:13:12.987 "superblock": true, 00:13:12.987 "num_base_bdevs": 2, 00:13:12.987 "num_base_bdevs_discovered": 1, 00:13:12.987 "num_base_bdevs_operational": 1, 00:13:12.987 "base_bdevs_list": [ 00:13:12.987 { 00:13:12.987 "name": null, 00:13:12.987 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.987 "is_configured": false, 00:13:12.987 "data_offset": 0, 00:13:12.987 "data_size": 63488 00:13:12.987 }, 00:13:12.987 { 00:13:12.987 "name": "BaseBdev2", 00:13:12.987 "uuid": "0faa6841-1c85-5854-82fe-46eb5d537b69", 00:13:12.987 "is_configured": true, 00:13:12.987 "data_offset": 2048, 00:13:12.987 "data_size": 63488 00:13:12.987 } 00:13:12.987 ] 00:13:12.987 }' 00:13:12.987 16:09:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:12.987 16:09:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:12.987 16:09:39 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:12.987 16:09:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:12.987 16:09:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:12.987 16:09:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:13:12.987 16:09:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:12.987 16:09:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:12.987 16:09:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:12.987 16:09:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:12.987 16:09:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:12.987 16:09:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:12.987 16:09:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.987 16:09:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:12.987 [2024-12-12 16:09:39.299098] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:12.987 [2024-12-12 16:09:39.299277] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:12.987 [2024-12-12 16:09:39.299294] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:12.987 request: 00:13:12.987 { 00:13:12.987 "base_bdev": "BaseBdev1", 00:13:12.987 "raid_bdev": "raid_bdev1", 00:13:12.987 "method": 
"bdev_raid_add_base_bdev", 00:13:12.987 "req_id": 1 00:13:12.987 } 00:13:12.987 Got JSON-RPC error response 00:13:12.987 response: 00:13:12.987 { 00:13:12.987 "code": -22, 00:13:12.987 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:13:12.987 } 00:13:12.987 16:09:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:13:12.987 16:09:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:13:12.987 16:09:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:12.987 16:09:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:12.987 16:09:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:12.987 16:09:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:13:14.368 16:09:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:14.368 16:09:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:14.368 16:09:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:14.368 16:09:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:14.368 16:09:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:14.368 16:09:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:14.368 16:09:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:14.368 16:09:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:14.368 16:09:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:14.368 16:09:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:14.368 16:09:40 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.368 16:09:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:14.368 16:09:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.368 16:09:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.368 16:09:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.368 16:09:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:14.368 "name": "raid_bdev1", 00:13:14.368 "uuid": "710dd61e-c78c-4a25-8ff2-b97fede60059", 00:13:14.368 "strip_size_kb": 0, 00:13:14.368 "state": "online", 00:13:14.368 "raid_level": "raid1", 00:13:14.368 "superblock": true, 00:13:14.368 "num_base_bdevs": 2, 00:13:14.368 "num_base_bdevs_discovered": 1, 00:13:14.368 "num_base_bdevs_operational": 1, 00:13:14.368 "base_bdevs_list": [ 00:13:14.368 { 00:13:14.368 "name": null, 00:13:14.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.368 "is_configured": false, 00:13:14.368 "data_offset": 0, 00:13:14.368 "data_size": 63488 00:13:14.368 }, 00:13:14.368 { 00:13:14.368 "name": "BaseBdev2", 00:13:14.368 "uuid": "0faa6841-1c85-5854-82fe-46eb5d537b69", 00:13:14.368 "is_configured": true, 00:13:14.368 "data_offset": 2048, 00:13:14.368 "data_size": 63488 00:13:14.368 } 00:13:14.368 ] 00:13:14.368 }' 00:13:14.368 16:09:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:14.368 16:09:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.628 16:09:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:14.628 16:09:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:14.628 16:09:40 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:14.628 16:09:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:14.628 16:09:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:14.628 16:09:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.628 16:09:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.628 16:09:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.628 16:09:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:14.628 16:09:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.628 16:09:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:14.628 "name": "raid_bdev1", 00:13:14.628 "uuid": "710dd61e-c78c-4a25-8ff2-b97fede60059", 00:13:14.628 "strip_size_kb": 0, 00:13:14.628 "state": "online", 00:13:14.628 "raid_level": "raid1", 00:13:14.628 "superblock": true, 00:13:14.628 "num_base_bdevs": 2, 00:13:14.628 "num_base_bdevs_discovered": 1, 00:13:14.628 "num_base_bdevs_operational": 1, 00:13:14.628 "base_bdevs_list": [ 00:13:14.628 { 00:13:14.628 "name": null, 00:13:14.628 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.628 "is_configured": false, 00:13:14.628 "data_offset": 0, 00:13:14.628 "data_size": 63488 00:13:14.628 }, 00:13:14.628 { 00:13:14.628 "name": "BaseBdev2", 00:13:14.628 "uuid": "0faa6841-1c85-5854-82fe-46eb5d537b69", 00:13:14.628 "is_configured": true, 00:13:14.628 "data_offset": 2048, 00:13:14.628 "data_size": 63488 00:13:14.628 } 00:13:14.628 ] 00:13:14.628 }' 00:13:14.628 16:09:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:14.628 16:09:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:13:14.628 16:09:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:14.628 16:09:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:14.628 16:09:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 77793 00:13:14.628 16:09:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 77793 ']' 00:13:14.628 16:09:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 77793 00:13:14.628 16:09:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:13:14.628 16:09:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:14.628 16:09:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77793 00:13:14.628 killing process with pid 77793 00:13:14.628 Received shutdown signal, test time was about 60.000000 seconds 00:13:14.628 00:13:14.628 Latency(us) 00:13:14.628 [2024-12-12T16:09:40.980Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:14.628 [2024-12-12T16:09:40.980Z] =================================================================================================================== 00:13:14.628 [2024-12-12T16:09:40.980Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:14.628 16:09:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:14.628 16:09:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:14.628 16:09:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77793' 00:13:14.628 16:09:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 77793 00:13:14.628 [2024-12-12 16:09:40.928661] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:14.628 16:09:40 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 77793 00:13:14.628 [2024-12-12 16:09:40.928846] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:14.628 [2024-12-12 16:09:40.928937] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:14.628 [2024-12-12 16:09:40.928956] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:13:15.197 [2024-12-12 16:09:41.259817] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:16.137 16:09:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:13:16.137 00:13:16.137 real 0m24.016s 00:13:16.137 user 0m29.335s 00:13:16.137 sys 0m3.726s 00:13:16.137 16:09:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:16.137 ************************************ 00:13:16.137 END TEST raid_rebuild_test_sb 00:13:16.137 ************************************ 00:13:16.137 16:09:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.397 16:09:42 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:13:16.397 16:09:42 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:16.397 16:09:42 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:16.397 16:09:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:16.397 ************************************ 00:13:16.397 START TEST raid_rebuild_test_io 00:13:16.397 ************************************ 00:13:16.397 16:09:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false true true 00:13:16.397 16:09:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:16.397 16:09:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:13:16.397 16:09:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:16.397 16:09:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:13:16.397 16:09:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:16.397 16:09:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:16.397 16:09:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:16.397 16:09:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:16.397 16:09:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:16.397 16:09:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:16.397 16:09:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:16.397 16:09:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:16.397 16:09:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:16.397 16:09:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:16.397 16:09:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:16.397 16:09:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:16.397 16:09:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:16.397 16:09:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:16.397 16:09:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:16.397 16:09:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:16.397 16:09:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:16.397 
16:09:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:16.397 16:09:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:13:16.397 16:09:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=78524 00:13:16.397 16:09:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:16.397 16:09:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 78524 00:13:16.397 16:09:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 78524 ']' 00:13:16.397 16:09:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:16.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:16.397 16:09:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:16.397 16:09:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:16.397 16:09:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:16.397 16:09:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:16.397 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:16.397 Zero copy mechanism will not be used. 00:13:16.397 [2024-12-12 16:09:42.636587] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:13:16.397 [2024-12-12 16:09:42.636704] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78524 ] 00:13:16.656 [2024-12-12 16:09:42.808880] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:16.656 [2024-12-12 16:09:42.944842] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:16.916 [2024-12-12 16:09:43.179935] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:16.916 [2024-12-12 16:09:43.180012] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:17.176 16:09:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:17.176 16:09:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:13:17.176 16:09:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:17.176 16:09:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:17.176 16:09:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.176 16:09:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:17.436 BaseBdev1_malloc 00:13:17.436 16:09:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.436 16:09:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:17.436 16:09:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.436 16:09:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:17.436 [2024-12-12 16:09:43.556113] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:13:17.436 [2024-12-12 16:09:43.556307] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:17.436 [2024-12-12 16:09:43.556341] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:17.436 [2024-12-12 16:09:43.556356] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:17.436 [2024-12-12 16:09:43.558720] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:17.436 [2024-12-12 16:09:43.558772] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:17.436 BaseBdev1 00:13:17.436 16:09:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.436 16:09:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:17.436 16:09:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:17.436 16:09:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.436 16:09:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:17.436 BaseBdev2_malloc 00:13:17.436 16:09:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.436 16:09:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:17.436 16:09:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.436 16:09:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:17.436 [2024-12-12 16:09:43.613580] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:17.436 [2024-12-12 16:09:43.613654] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:17.436 [2024-12-12 16:09:43.613677] vbdev_passthru.c: 
682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:17.436 [2024-12-12 16:09:43.613694] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:17.436 [2024-12-12 16:09:43.616129] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:17.436 [2024-12-12 16:09:43.616174] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:17.436 BaseBdev2 00:13:17.436 16:09:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.436 16:09:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:17.436 16:09:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.436 16:09:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:17.436 spare_malloc 00:13:17.436 16:09:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.436 16:09:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:17.436 16:09:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.436 16:09:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:17.436 spare_delay 00:13:17.436 16:09:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.436 16:09:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:17.436 16:09:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.436 16:09:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:17.436 [2024-12-12 16:09:43.698365] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 
00:13:17.436 [2024-12-12 16:09:43.698439] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:17.436 [2024-12-12 16:09:43.698460] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:13:17.436 [2024-12-12 16:09:43.698474] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:17.436 [2024-12-12 16:09:43.700945] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:17.436 [2024-12-12 16:09:43.700992] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:17.436 spare 00:13:17.436 16:09:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.436 16:09:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:13:17.436 16:09:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.436 16:09:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:17.436 [2024-12-12 16:09:43.710403] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:17.436 [2024-12-12 16:09:43.712476] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:17.436 [2024-12-12 16:09:43.712672] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:17.436 [2024-12-12 16:09:43.712693] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:17.436 [2024-12-12 16:09:43.712972] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:17.436 [2024-12-12 16:09:43.713155] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:17.436 [2024-12-12 16:09:43.713168] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000007780 00:13:17.436 [2024-12-12 16:09:43.713337] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:17.436 16:09:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.436 16:09:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:17.436 16:09:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:17.436 16:09:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:17.436 16:09:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:17.436 16:09:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:17.436 16:09:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:17.436 16:09:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:17.436 16:09:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:17.436 16:09:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:17.436 16:09:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:17.436 16:09:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.436 16:09:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:17.436 16:09:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.436 16:09:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:17.436 16:09:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.436 16:09:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:17.436 
"name": "raid_bdev1", 00:13:17.436 "uuid": "f3cfceb9-3c99-453c-89cb-760c31e938bc", 00:13:17.436 "strip_size_kb": 0, 00:13:17.436 "state": "online", 00:13:17.436 "raid_level": "raid1", 00:13:17.436 "superblock": false, 00:13:17.436 "num_base_bdevs": 2, 00:13:17.436 "num_base_bdevs_discovered": 2, 00:13:17.436 "num_base_bdevs_operational": 2, 00:13:17.436 "base_bdevs_list": [ 00:13:17.436 { 00:13:17.436 "name": "BaseBdev1", 00:13:17.436 "uuid": "b4f097c8-2baa-5c04-a35e-266659e479aa", 00:13:17.436 "is_configured": true, 00:13:17.436 "data_offset": 0, 00:13:17.436 "data_size": 65536 00:13:17.436 }, 00:13:17.436 { 00:13:17.436 "name": "BaseBdev2", 00:13:17.436 "uuid": "2a6092d2-8009-592a-8093-2ea60cba8839", 00:13:17.436 "is_configured": true, 00:13:17.436 "data_offset": 0, 00:13:17.436 "data_size": 65536 00:13:17.436 } 00:13:17.437 ] 00:13:17.437 }' 00:13:17.437 16:09:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:17.437 16:09:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:18.005 16:09:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:18.006 16:09:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:18.006 16:09:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.006 16:09:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:18.006 [2024-12-12 16:09:44.109993] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:18.006 16:09:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.006 16:09:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:13:18.006 16:09:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.006 16:09:44 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.006 16:09:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:18.006 16:09:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:18.006 16:09:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.006 16:09:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:13:18.006 16:09:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:13:18.006 16:09:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:18.006 16:09:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:18.006 16:09:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.006 16:09:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:18.006 [2024-12-12 16:09:44.205492] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:18.006 16:09:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.006 16:09:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:18.006 16:09:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:18.006 16:09:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:18.006 16:09:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:18.006 16:09:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:18.006 16:09:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:18.006 16:09:44 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:18.006 16:09:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:18.006 16:09:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:18.006 16:09:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:18.006 16:09:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.006 16:09:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:18.006 16:09:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.006 16:09:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:18.006 16:09:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.006 16:09:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:18.006 "name": "raid_bdev1", 00:13:18.006 "uuid": "f3cfceb9-3c99-453c-89cb-760c31e938bc", 00:13:18.006 "strip_size_kb": 0, 00:13:18.006 "state": "online", 00:13:18.006 "raid_level": "raid1", 00:13:18.006 "superblock": false, 00:13:18.006 "num_base_bdevs": 2, 00:13:18.006 "num_base_bdevs_discovered": 1, 00:13:18.006 "num_base_bdevs_operational": 1, 00:13:18.006 "base_bdevs_list": [ 00:13:18.006 { 00:13:18.006 "name": null, 00:13:18.006 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.006 "is_configured": false, 00:13:18.006 "data_offset": 0, 00:13:18.006 "data_size": 65536 00:13:18.006 }, 00:13:18.006 { 00:13:18.006 "name": "BaseBdev2", 00:13:18.006 "uuid": "2a6092d2-8009-592a-8093-2ea60cba8839", 00:13:18.006 "is_configured": true, 00:13:18.006 "data_offset": 0, 00:13:18.006 "data_size": 65536 00:13:18.006 } 00:13:18.006 ] 00:13:18.006 }' 00:13:18.006 16:09:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:13:18.006 16:09:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:18.006 [2024-12-12 16:09:44.307203] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:18.006 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:18.006 Zero copy mechanism will not be used. 00:13:18.006 Running I/O for 60 seconds... 00:13:18.266 16:09:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:18.266 16:09:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.266 16:09:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:18.266 [2024-12-12 16:09:44.605304] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:18.526 16:09:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.526 16:09:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:18.526 [2024-12-12 16:09:44.666578] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:18.526 [2024-12-12 16:09:44.668884] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:18.526 [2024-12-12 16:09:44.777383] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:18.526 [2024-12-12 16:09:44.778093] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:18.789 [2024-12-12 16:09:44.994055] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:18.789 [2024-12-12 16:09:44.994702] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:19.050 170.00 IOPS, 510.00 MiB/s 
[2024-12-12T16:09:45.402Z] [2024-12-12 16:09:45.334433] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:19.050 [2024-12-12 16:09:45.335343] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:19.310 [2024-12-12 16:09:45.483729] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:19.310 16:09:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:19.310 16:09:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:19.310 16:09:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:19.310 16:09:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:19.310 16:09:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:19.310 16:09:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.310 16:09:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:19.310 16:09:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.310 16:09:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:19.570 16:09:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.570 16:09:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:19.570 "name": "raid_bdev1", 00:13:19.570 "uuid": "f3cfceb9-3c99-453c-89cb-760c31e938bc", 00:13:19.570 "strip_size_kb": 0, 00:13:19.570 "state": "online", 00:13:19.570 "raid_level": "raid1", 00:13:19.570 "superblock": false, 00:13:19.570 "num_base_bdevs": 2, 00:13:19.570 
"num_base_bdevs_discovered": 2, 00:13:19.570 "num_base_bdevs_operational": 2, 00:13:19.570 "process": { 00:13:19.570 "type": "rebuild", 00:13:19.570 "target": "spare", 00:13:19.570 "progress": { 00:13:19.570 "blocks": 12288, 00:13:19.570 "percent": 18 00:13:19.570 } 00:13:19.570 }, 00:13:19.570 "base_bdevs_list": [ 00:13:19.570 { 00:13:19.570 "name": "spare", 00:13:19.570 "uuid": "05a0a22e-ad9e-5e03-95c2-bb0f16301187", 00:13:19.570 "is_configured": true, 00:13:19.570 "data_offset": 0, 00:13:19.570 "data_size": 65536 00:13:19.570 }, 00:13:19.570 { 00:13:19.570 "name": "BaseBdev2", 00:13:19.570 "uuid": "2a6092d2-8009-592a-8093-2ea60cba8839", 00:13:19.570 "is_configured": true, 00:13:19.570 "data_offset": 0, 00:13:19.570 "data_size": 65536 00:13:19.570 } 00:13:19.570 ] 00:13:19.570 }' 00:13:19.570 [2024-12-12 16:09:45.707020] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:19.570 16:09:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:19.570 16:09:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:19.570 16:09:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:19.570 16:09:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:19.570 16:09:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:19.570 16:09:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.570 16:09:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:19.570 [2024-12-12 16:09:45.815283] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:19.570 [2024-12-12 16:09:45.815477] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 
offset_end: 18432 00:13:19.570 [2024-12-12 16:09:45.815996] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:19.830 [2024-12-12 16:09:45.924104] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:19.830 [2024-12-12 16:09:45.940064] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:19.830 [2024-12-12 16:09:45.940142] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:19.830 [2024-12-12 16:09:45.940167] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:19.830 [2024-12-12 16:09:45.985593] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:13:19.830 16:09:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.830 16:09:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:19.830 16:09:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:19.830 16:09:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:19.830 16:09:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:19.830 16:09:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:19.830 16:09:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:19.830 16:09:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:19.830 16:09:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:19.830 16:09:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:19.830 16:09:46 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:19.830 16:09:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.830 16:09:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:19.830 16:09:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.830 16:09:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:19.830 16:09:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.830 16:09:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:19.830 "name": "raid_bdev1", 00:13:19.830 "uuid": "f3cfceb9-3c99-453c-89cb-760c31e938bc", 00:13:19.830 "strip_size_kb": 0, 00:13:19.830 "state": "online", 00:13:19.830 "raid_level": "raid1", 00:13:19.830 "superblock": false, 00:13:19.830 "num_base_bdevs": 2, 00:13:19.830 "num_base_bdevs_discovered": 1, 00:13:19.830 "num_base_bdevs_operational": 1, 00:13:19.830 "base_bdevs_list": [ 00:13:19.830 { 00:13:19.830 "name": null, 00:13:19.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:19.830 "is_configured": false, 00:13:19.830 "data_offset": 0, 00:13:19.830 "data_size": 65536 00:13:19.830 }, 00:13:19.830 { 00:13:19.830 "name": "BaseBdev2", 00:13:19.830 "uuid": "2a6092d2-8009-592a-8093-2ea60cba8839", 00:13:19.830 "is_configured": true, 00:13:19.830 "data_offset": 0, 00:13:19.830 "data_size": 65536 00:13:19.830 } 00:13:19.830 ] 00:13:19.830 }' 00:13:19.830 16:09:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:19.830 16:09:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:20.090 162.50 IOPS, 487.50 MiB/s [2024-12-12T16:09:46.442Z] 16:09:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:20.090 16:09:46 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:20.090 16:09:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:20.090 16:09:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:20.090 16:09:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:20.090 16:09:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.090 16:09:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.090 16:09:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:20.090 16:09:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:20.349 16:09:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.349 16:09:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:20.349 "name": "raid_bdev1", 00:13:20.349 "uuid": "f3cfceb9-3c99-453c-89cb-760c31e938bc", 00:13:20.349 "strip_size_kb": 0, 00:13:20.349 "state": "online", 00:13:20.349 "raid_level": "raid1", 00:13:20.349 "superblock": false, 00:13:20.349 "num_base_bdevs": 2, 00:13:20.349 "num_base_bdevs_discovered": 1, 00:13:20.349 "num_base_bdevs_operational": 1, 00:13:20.349 "base_bdevs_list": [ 00:13:20.349 { 00:13:20.349 "name": null, 00:13:20.349 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:20.349 "is_configured": false, 00:13:20.349 "data_offset": 0, 00:13:20.349 "data_size": 65536 00:13:20.349 }, 00:13:20.349 { 00:13:20.349 "name": "BaseBdev2", 00:13:20.349 "uuid": "2a6092d2-8009-592a-8093-2ea60cba8839", 00:13:20.350 "is_configured": true, 00:13:20.350 "data_offset": 0, 00:13:20.350 "data_size": 65536 00:13:20.350 } 00:13:20.350 ] 00:13:20.350 }' 00:13:20.350 16:09:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- 
# jq -r '.process.type // "none"' 00:13:20.350 16:09:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:20.350 16:09:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:20.350 16:09:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:20.350 16:09:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:20.350 16:09:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.350 16:09:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:20.350 [2024-12-12 16:09:46.571748] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:20.350 16:09:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.350 16:09:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:20.350 [2024-12-12 16:09:46.631730] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:20.350 [2024-12-12 16:09:46.634083] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:20.609 [2024-12-12 16:09:46.754803] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:20.609 [2024-12-12 16:09:46.755796] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:20.868 [2024-12-12 16:09:46.985163] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:20.868 [2024-12-12 16:09:46.985825] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:21.127 168.33 IOPS, 505.00 MiB/s [2024-12-12T16:09:47.479Z] [2024-12-12 
16:09:47.320256] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:21.386 [2024-12-12 16:09:47.596075] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:21.386 16:09:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:21.386 16:09:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:21.386 16:09:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:21.386 16:09:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:21.386 16:09:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:21.386 16:09:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.386 16:09:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.386 16:09:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:21.386 16:09:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:21.386 16:09:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.386 16:09:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:21.386 "name": "raid_bdev1", 00:13:21.386 "uuid": "f3cfceb9-3c99-453c-89cb-760c31e938bc", 00:13:21.386 "strip_size_kb": 0, 00:13:21.386 "state": "online", 00:13:21.386 "raid_level": "raid1", 00:13:21.386 "superblock": false, 00:13:21.386 "num_base_bdevs": 2, 00:13:21.386 "num_base_bdevs_discovered": 2, 00:13:21.386 "num_base_bdevs_operational": 2, 00:13:21.386 "process": { 00:13:21.386 "type": "rebuild", 00:13:21.386 "target": "spare", 00:13:21.386 "progress": { 00:13:21.386 
"blocks": 10240, 00:13:21.386 "percent": 15 00:13:21.386 } 00:13:21.386 }, 00:13:21.386 "base_bdevs_list": [ 00:13:21.386 { 00:13:21.386 "name": "spare", 00:13:21.386 "uuid": "05a0a22e-ad9e-5e03-95c2-bb0f16301187", 00:13:21.386 "is_configured": true, 00:13:21.386 "data_offset": 0, 00:13:21.387 "data_size": 65536 00:13:21.387 }, 00:13:21.387 { 00:13:21.387 "name": "BaseBdev2", 00:13:21.387 "uuid": "2a6092d2-8009-592a-8093-2ea60cba8839", 00:13:21.387 "is_configured": true, 00:13:21.387 "data_offset": 0, 00:13:21.387 "data_size": 65536 00:13:21.387 } 00:13:21.387 ] 00:13:21.387 }' 00:13:21.387 16:09:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:21.387 16:09:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:21.387 16:09:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:21.646 16:09:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:21.646 16:09:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:13:21.646 16:09:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:13:21.646 16:09:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:21.646 16:09:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:13:21.646 16:09:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=415 00:13:21.646 16:09:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:21.647 16:09:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:21.647 16:09:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:21.647 16:09:47 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:21.647 16:09:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:21.647 16:09:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:21.647 16:09:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.647 16:09:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.647 16:09:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:21.647 16:09:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:21.647 16:09:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.647 16:09:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:21.647 "name": "raid_bdev1", 00:13:21.647 "uuid": "f3cfceb9-3c99-453c-89cb-760c31e938bc", 00:13:21.647 "strip_size_kb": 0, 00:13:21.647 "state": "online", 00:13:21.647 "raid_level": "raid1", 00:13:21.647 "superblock": false, 00:13:21.647 "num_base_bdevs": 2, 00:13:21.647 "num_base_bdevs_discovered": 2, 00:13:21.647 "num_base_bdevs_operational": 2, 00:13:21.647 "process": { 00:13:21.647 "type": "rebuild", 00:13:21.647 "target": "spare", 00:13:21.647 "progress": { 00:13:21.647 "blocks": 10240, 00:13:21.647 "percent": 15 00:13:21.647 } 00:13:21.647 }, 00:13:21.647 "base_bdevs_list": [ 00:13:21.647 { 00:13:21.647 "name": "spare", 00:13:21.647 "uuid": "05a0a22e-ad9e-5e03-95c2-bb0f16301187", 00:13:21.647 "is_configured": true, 00:13:21.647 "data_offset": 0, 00:13:21.647 "data_size": 65536 00:13:21.647 }, 00:13:21.647 { 00:13:21.647 "name": "BaseBdev2", 00:13:21.647 "uuid": "2a6092d2-8009-592a-8093-2ea60cba8839", 00:13:21.647 "is_configured": true, 00:13:21.647 "data_offset": 0, 00:13:21.647 "data_size": 65536 00:13:21.647 } 00:13:21.647 ] 00:13:21.647 }' 00:13:21.647 
16:09:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:21.647 16:09:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:21.647 16:09:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:21.647 16:09:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:21.647 16:09:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:21.647 [2024-12-12 16:09:47.944284] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:21.906 [2024-12-12 16:09:48.186438] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:21.907 [2024-12-12 16:09:48.186868] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:22.425 149.00 IOPS, 447.00 MiB/s [2024-12-12T16:09:48.777Z] [2024-12-12 16:09:48.521574] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:13:22.685 16:09:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:22.685 16:09:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:22.685 16:09:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:22.685 16:09:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:22.685 16:09:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:22.685 16:09:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:22.685 16:09:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:13:22.685 16:09:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.685 16:09:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:22.685 16:09:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:22.685 16:09:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.685 16:09:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:22.685 "name": "raid_bdev1", 00:13:22.685 "uuid": "f3cfceb9-3c99-453c-89cb-760c31e938bc", 00:13:22.685 "strip_size_kb": 0, 00:13:22.685 "state": "online", 00:13:22.685 "raid_level": "raid1", 00:13:22.685 "superblock": false, 00:13:22.685 "num_base_bdevs": 2, 00:13:22.685 "num_base_bdevs_discovered": 2, 00:13:22.685 "num_base_bdevs_operational": 2, 00:13:22.685 "process": { 00:13:22.685 "type": "rebuild", 00:13:22.685 "target": "spare", 00:13:22.685 "progress": { 00:13:22.685 "blocks": 26624, 00:13:22.685 "percent": 40 00:13:22.685 } 00:13:22.685 }, 00:13:22.685 "base_bdevs_list": [ 00:13:22.685 { 00:13:22.685 "name": "spare", 00:13:22.685 "uuid": "05a0a22e-ad9e-5e03-95c2-bb0f16301187", 00:13:22.685 "is_configured": true, 00:13:22.685 "data_offset": 0, 00:13:22.685 "data_size": 65536 00:13:22.685 }, 00:13:22.685 { 00:13:22.685 "name": "BaseBdev2", 00:13:22.685 "uuid": "2a6092d2-8009-592a-8093-2ea60cba8839", 00:13:22.685 "is_configured": true, 00:13:22.685 "data_offset": 0, 00:13:22.685 "data_size": 65536 00:13:22.685 } 00:13:22.685 ] 00:13:22.685 }' 00:13:22.685 16:09:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:22.685 [2024-12-12 16:09:48.973875] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:13:22.685 16:09:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # 
[[ rebuild == \r\e\b\u\i\l\d ]] 00:13:22.685 16:09:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:22.685 16:09:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:22.685 16:09:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:22.945 [2024-12-12 16:09:49.230711] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:13:23.464 131.40 IOPS, 394.20 MiB/s [2024-12-12T16:09:49.816Z] [2024-12-12 16:09:49.669016] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:13:23.464 [2024-12-12 16:09:49.669608] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:13:23.724 [2024-12-12 16:09:50.005547] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:13:23.724 16:09:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:23.724 16:09:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:23.724 16:09:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:23.724 16:09:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:23.724 16:09:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:23.724 16:09:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:23.724 16:09:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.724 16:09:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:23.724 16:09:50 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.724 16:09:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:23.724 16:09:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.984 16:09:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:23.984 "name": "raid_bdev1", 00:13:23.984 "uuid": "f3cfceb9-3c99-453c-89cb-760c31e938bc", 00:13:23.984 "strip_size_kb": 0, 00:13:23.984 "state": "online", 00:13:23.984 "raid_level": "raid1", 00:13:23.984 "superblock": false, 00:13:23.984 "num_base_bdevs": 2, 00:13:23.984 "num_base_bdevs_discovered": 2, 00:13:23.984 "num_base_bdevs_operational": 2, 00:13:23.984 "process": { 00:13:23.984 "type": "rebuild", 00:13:23.984 "target": "spare", 00:13:23.984 "progress": { 00:13:23.984 "blocks": 45056, 00:13:23.984 "percent": 68 00:13:23.984 } 00:13:23.984 }, 00:13:23.984 "base_bdevs_list": [ 00:13:23.984 { 00:13:23.984 "name": "spare", 00:13:23.984 "uuid": "05a0a22e-ad9e-5e03-95c2-bb0f16301187", 00:13:23.984 "is_configured": true, 00:13:23.984 "data_offset": 0, 00:13:23.984 "data_size": 65536 00:13:23.984 }, 00:13:23.984 { 00:13:23.984 "name": "BaseBdev2", 00:13:23.984 "uuid": "2a6092d2-8009-592a-8093-2ea60cba8839", 00:13:23.984 "is_configured": true, 00:13:23.984 "data_offset": 0, 00:13:23.984 "data_size": 65536 00:13:23.984 } 00:13:23.984 ] 00:13:23.984 }' 00:13:23.984 16:09:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:23.984 16:09:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:23.984 16:09:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:23.984 16:09:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:23.984 16:09:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # 
sleep 1 00:13:23.984 [2024-12-12 16:09:50.231686] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:13:24.243 118.00 IOPS, 354.00 MiB/s [2024-12-12T16:09:50.595Z] [2024-12-12 16:09:50.456247] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:13:24.243 [2024-12-12 16:09:50.574021] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:13:25.180 16:09:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:25.180 16:09:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:25.180 16:09:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:25.180 16:09:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:25.180 16:09:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:25.180 16:09:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:25.180 16:09:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.180 16:09:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:25.180 16:09:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.180 16:09:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:25.180 16:09:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.180 16:09:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:25.180 "name": "raid_bdev1", 00:13:25.180 "uuid": "f3cfceb9-3c99-453c-89cb-760c31e938bc", 00:13:25.180 "strip_size_kb": 
0, 00:13:25.180 "state": "online", 00:13:25.180 "raid_level": "raid1", 00:13:25.180 "superblock": false, 00:13:25.180 "num_base_bdevs": 2, 00:13:25.180 "num_base_bdevs_discovered": 2, 00:13:25.180 "num_base_bdevs_operational": 2, 00:13:25.180 "process": { 00:13:25.180 "type": "rebuild", 00:13:25.180 "target": "spare", 00:13:25.180 "progress": { 00:13:25.180 "blocks": 61440, 00:13:25.180 "percent": 93 00:13:25.180 } 00:13:25.180 }, 00:13:25.180 "base_bdevs_list": [ 00:13:25.180 { 00:13:25.180 "name": "spare", 00:13:25.180 "uuid": "05a0a22e-ad9e-5e03-95c2-bb0f16301187", 00:13:25.180 "is_configured": true, 00:13:25.180 "data_offset": 0, 00:13:25.180 "data_size": 65536 00:13:25.180 }, 00:13:25.180 { 00:13:25.180 "name": "BaseBdev2", 00:13:25.180 "uuid": "2a6092d2-8009-592a-8093-2ea60cba8839", 00:13:25.180 "is_configured": true, 00:13:25.180 "data_offset": 0, 00:13:25.180 "data_size": 65536 00:13:25.180 } 00:13:25.180 ] 00:13:25.180 }' 00:13:25.180 16:09:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:25.180 16:09:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:25.180 16:09:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:25.180 108.71 IOPS, 326.14 MiB/s [2024-12-12T16:09:51.532Z] 16:09:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:25.180 16:09:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:25.180 [2024-12-12 16:09:51.325362] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:25.180 [2024-12-12 16:09:51.430511] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:25.180 [2024-12-12 16:09:51.435460] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:26.118 100.00 IOPS, 300.00 MiB/s [2024-12-12T16:09:52.470Z] 
16:09:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:26.118 16:09:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:26.118 16:09:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:26.118 16:09:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:26.118 16:09:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:26.118 16:09:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:26.118 16:09:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.118 16:09:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.118 16:09:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:26.118 16:09:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:26.118 16:09:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.118 16:09:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:26.118 "name": "raid_bdev1", 00:13:26.118 "uuid": "f3cfceb9-3c99-453c-89cb-760c31e938bc", 00:13:26.118 "strip_size_kb": 0, 00:13:26.118 "state": "online", 00:13:26.118 "raid_level": "raid1", 00:13:26.118 "superblock": false, 00:13:26.118 "num_base_bdevs": 2, 00:13:26.118 "num_base_bdevs_discovered": 2, 00:13:26.118 "num_base_bdevs_operational": 2, 00:13:26.118 "base_bdevs_list": [ 00:13:26.118 { 00:13:26.118 "name": "spare", 00:13:26.118 "uuid": "05a0a22e-ad9e-5e03-95c2-bb0f16301187", 00:13:26.118 "is_configured": true, 00:13:26.118 "data_offset": 0, 00:13:26.118 "data_size": 65536 00:13:26.118 }, 00:13:26.118 { 00:13:26.118 "name": "BaseBdev2", 00:13:26.118 "uuid": 
"2a6092d2-8009-592a-8093-2ea60cba8839", 00:13:26.118 "is_configured": true, 00:13:26.118 "data_offset": 0, 00:13:26.118 "data_size": 65536 00:13:26.118 } 00:13:26.119 ] 00:13:26.119 }' 00:13:26.119 16:09:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:26.119 16:09:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:26.119 16:09:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:26.377 16:09:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:26.377 16:09:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:13:26.377 16:09:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:26.377 16:09:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:26.377 16:09:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:26.377 16:09:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:26.377 16:09:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:26.377 16:09:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.377 16:09:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:26.378 16:09:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.378 16:09:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:26.378 16:09:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.378 16:09:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:26.378 "name": "raid_bdev1", 00:13:26.378 "uuid": 
"f3cfceb9-3c99-453c-89cb-760c31e938bc", 00:13:26.378 "strip_size_kb": 0, 00:13:26.378 "state": "online", 00:13:26.378 "raid_level": "raid1", 00:13:26.378 "superblock": false, 00:13:26.378 "num_base_bdevs": 2, 00:13:26.378 "num_base_bdevs_discovered": 2, 00:13:26.378 "num_base_bdevs_operational": 2, 00:13:26.378 "base_bdevs_list": [ 00:13:26.378 { 00:13:26.378 "name": "spare", 00:13:26.378 "uuid": "05a0a22e-ad9e-5e03-95c2-bb0f16301187", 00:13:26.378 "is_configured": true, 00:13:26.378 "data_offset": 0, 00:13:26.378 "data_size": 65536 00:13:26.378 }, 00:13:26.378 { 00:13:26.378 "name": "BaseBdev2", 00:13:26.378 "uuid": "2a6092d2-8009-592a-8093-2ea60cba8839", 00:13:26.378 "is_configured": true, 00:13:26.378 "data_offset": 0, 00:13:26.378 "data_size": 65536 00:13:26.378 } 00:13:26.378 ] 00:13:26.378 }' 00:13:26.378 16:09:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:26.378 16:09:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:26.378 16:09:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:26.378 16:09:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:26.378 16:09:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:26.378 16:09:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:26.378 16:09:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:26.378 16:09:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:26.378 16:09:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:26.378 16:09:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:26.378 16:09:52 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:26.378 16:09:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:26.378 16:09:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:26.378 16:09:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:26.378 16:09:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.378 16:09:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:26.378 16:09:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.378 16:09:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:26.378 16:09:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.378 16:09:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:26.378 "name": "raid_bdev1", 00:13:26.378 "uuid": "f3cfceb9-3c99-453c-89cb-760c31e938bc", 00:13:26.378 "strip_size_kb": 0, 00:13:26.378 "state": "online", 00:13:26.378 "raid_level": "raid1", 00:13:26.378 "superblock": false, 00:13:26.378 "num_base_bdevs": 2, 00:13:26.378 "num_base_bdevs_discovered": 2, 00:13:26.378 "num_base_bdevs_operational": 2, 00:13:26.378 "base_bdevs_list": [ 00:13:26.378 { 00:13:26.378 "name": "spare", 00:13:26.378 "uuid": "05a0a22e-ad9e-5e03-95c2-bb0f16301187", 00:13:26.378 "is_configured": true, 00:13:26.378 "data_offset": 0, 00:13:26.378 "data_size": 65536 00:13:26.378 }, 00:13:26.378 { 00:13:26.378 "name": "BaseBdev2", 00:13:26.378 "uuid": "2a6092d2-8009-592a-8093-2ea60cba8839", 00:13:26.378 "is_configured": true, 00:13:26.378 "data_offset": 0, 00:13:26.378 "data_size": 65536 00:13:26.378 } 00:13:26.378 ] 00:13:26.378 }' 00:13:26.378 16:09:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:26.378 16:09:52 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:26.946 16:09:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:26.946 16:09:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.946 16:09:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:26.946 [2024-12-12 16:09:53.047758] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:26.946 [2024-12-12 16:09:53.047814] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:26.946 00:13:26.946 Latency(us) 00:13:26.946 [2024-12-12T16:09:53.298Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:26.946 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:13:26.946 raid_bdev1 : 8.77 94.63 283.89 0.00 0.00 14762.66 318.38 114473.36 00:13:26.946 [2024-12-12T16:09:53.299Z] =================================================================================================================== 00:13:26.947 [2024-12-12T16:09:53.299Z] Total : 94.63 283.89 0.00 0.00 14762.66 318.38 114473.36 00:13:26.947 { 00:13:26.947 "results": [ 00:13:26.947 { 00:13:26.947 "job": "raid_bdev1", 00:13:26.947 "core_mask": "0x1", 00:13:26.947 "workload": "randrw", 00:13:26.947 "percentage": 50, 00:13:26.947 "status": "finished", 00:13:26.947 "queue_depth": 2, 00:13:26.947 "io_size": 3145728, 00:13:26.947 "runtime": 8.770932, 00:13:26.947 "iops": 94.63076443871643, 00:13:26.947 "mibps": 283.8922933161493, 00:13:26.947 "io_failed": 0, 00:13:26.947 "io_timeout": 0, 00:13:26.947 "avg_latency_us": 14762.656789603829, 00:13:26.947 "min_latency_us": 318.37903930131006, 00:13:26.947 "max_latency_us": 114473.36244541485 00:13:26.947 } 00:13:26.947 ], 00:13:26.947 "core_count": 1 00:13:26.947 } 00:13:26.947 [2024-12-12 16:09:53.085052] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:26.947 [2024-12-12 16:09:53.085130] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:26.947 [2024-12-12 16:09:53.085218] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:26.947 [2024-12-12 16:09:53.085230] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:26.947 16:09:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.947 16:09:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.947 16:09:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.947 16:09:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:13:26.947 16:09:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:26.947 16:09:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.947 16:09:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:26.947 16:09:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:26.947 16:09:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:13:26.947 16:09:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:13:26.947 16:09:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:26.947 16:09:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:13:26.947 16:09:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:26.947 16:09:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:26.947 16:09:53 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:26.947 16:09:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:13:26.947 16:09:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:26.947 16:09:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:26.947 16:09:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:13:27.207 /dev/nbd0 00:13:27.207 16:09:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:27.207 16:09:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:27.207 16:09:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:27.207 16:09:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:13:27.207 16:09:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:27.207 16:09:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:27.207 16:09:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:27.207 16:09:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:13:27.207 16:09:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:27.207 16:09:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:27.207 16:09:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:27.207 1+0 records in 00:13:27.207 1+0 records out 00:13:27.207 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000228415 s, 17.9 MB/s 00:13:27.207 16:09:53 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:27.207 16:09:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:13:27.207 16:09:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:27.207 16:09:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:27.207 16:09:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:13:27.207 16:09:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:27.207 16:09:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:27.207 16:09:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:27.207 16:09:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:13:27.207 16:09:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:13:27.207 16:09:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:27.207 16:09:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:13:27.207 16:09:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:27.207 16:09:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:27.207 16:09:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:27.207 16:09:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:13:27.207 16:09:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:27.207 16:09:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:27.207 16:09:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:13:27.467 /dev/nbd1 00:13:27.467 16:09:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:27.467 16:09:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:27.467 16:09:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:27.467 16:09:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:13:27.467 16:09:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:27.467 16:09:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:27.467 16:09:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:27.467 16:09:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:13:27.467 16:09:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:27.467 16:09:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:27.467 16:09:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:27.467 1+0 records in 00:13:27.467 1+0 records out 00:13:27.467 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000344284 s, 11.9 MB/s 00:13:27.467 16:09:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:27.467 16:09:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:13:27.467 16:09:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:27.467 16:09:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
00:13:27.467 16:09:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:13:27.467 16:09:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:27.467 16:09:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:27.467 16:09:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:27.726 16:09:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:27.727 16:09:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:27.727 16:09:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:27.727 16:09:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:27.727 16:09:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:13:27.727 16:09:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:27.727 16:09:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:27.727 16:09:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:27.727 16:09:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:27.727 16:09:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:27.727 16:09:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:27.727 16:09:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:27.727 16:09:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:27.986 16:09:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:13:27.986 16:09:54 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@45 -- # return 0 00:13:27.986 16:09:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:27.986 16:09:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:27.986 16:09:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:27.986 16:09:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:27.986 16:09:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:13:27.986 16:09:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:27.987 16:09:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:27.987 16:09:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:27.987 16:09:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:27.987 16:09:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:27.987 16:09:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:27.987 16:09:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:27.987 16:09:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:27.987 16:09:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:13:27.987 16:09:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:27.987 16:09:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:13:27.987 16:09:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 78524 00:13:27.987 16:09:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 78524 ']' 00:13:27.987 16:09:54 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 78524 00:13:27.987 16:09:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:13:27.987 16:09:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:27.987 16:09:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78524 00:13:27.987 killing process with pid 78524 00:13:27.987 Received shutdown signal, test time was about 10.042237 seconds 00:13:27.987 00:13:27.987 Latency(us) 00:13:27.987 [2024-12-12T16:09:54.339Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:27.987 [2024-12-12T16:09:54.339Z] =================================================================================================================== 00:13:27.987 [2024-12-12T16:09:54.339Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:27.987 16:09:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:27.987 16:09:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:27.987 16:09:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78524' 00:13:27.987 16:09:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 78524 00:13:27.987 [2024-12-12 16:09:54.332502] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:27.987 16:09:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 78524 00:13:28.247 [2024-12-12 16:09:54.585699] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:29.638 16:09:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:13:29.638 00:13:29.638 real 0m13.334s 00:13:29.638 user 0m16.340s 00:13:29.638 sys 0m1.602s 00:13:29.638 ************************************ 00:13:29.638 END TEST raid_rebuild_test_io 00:13:29.638 
************************************ 00:13:29.638 16:09:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:29.638 16:09:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:29.638 16:09:55 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:13:29.638 16:09:55 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:29.638 16:09:55 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:29.638 16:09:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:29.638 ************************************ 00:13:29.638 START TEST raid_rebuild_test_sb_io 00:13:29.638 ************************************ 00:13:29.638 16:09:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true true true 00:13:29.638 16:09:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:29.638 16:09:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:13:29.638 16:09:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:13:29.638 16:09:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:13:29.638 16:09:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:29.638 16:09:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:29.638 16:09:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:29.638 16:09:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:29.638 16:09:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:29.638 16:09:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:29.638 16:09:55 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:29.638 16:09:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:29.638 16:09:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:29.638 16:09:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:29.638 16:09:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:29.638 16:09:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:29.638 16:09:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:29.638 16:09:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:29.638 16:09:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:29.638 16:09:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:29.638 16:09:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:29.638 16:09:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:29.638 16:09:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:13:29.638 16:09:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:13:29.638 16:09:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=78920 00:13:29.638 16:09:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 78920 00:13:29.638 16:09:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:29.638 16:09:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 78920 ']' 
00:13:29.638 16:09:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:29.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:29.638 16:09:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:29.638 16:09:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:29.638 16:09:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:29.638 16:09:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:29.898 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:29.898 Zero copy mechanism will not be used. 00:13:29.898 [2024-12-12 16:09:56.048704] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:13:29.898 [2024-12-12 16:09:56.048846] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78920 ] 00:13:29.898 [2024-12-12 16:09:56.223313] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:30.157 [2024-12-12 16:09:56.363332] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:30.417 [2024-12-12 16:09:56.600398] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:30.417 [2024-12-12 16:09:56.600500] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:30.677 16:09:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:30.677 16:09:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:13:30.677 16:09:56 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:30.677 16:09:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:30.677 16:09:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.677 16:09:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:30.677 BaseBdev1_malloc 00:13:30.677 16:09:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.677 16:09:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:30.677 16:09:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.677 16:09:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:30.677 [2024-12-12 16:09:56.937211] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:30.677 [2024-12-12 16:09:56.937306] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:30.677 [2024-12-12 16:09:56.937339] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:30.677 [2024-12-12 16:09:56.937355] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:30.677 [2024-12-12 16:09:56.939874] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:30.677 [2024-12-12 16:09:56.939939] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:30.677 BaseBdev1 00:13:30.677 16:09:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.677 16:09:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:30.677 16:09:56 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:30.677 16:09:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.677 16:09:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:30.677 BaseBdev2_malloc 00:13:30.677 16:09:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.677 16:09:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:30.677 16:09:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.677 16:09:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:30.677 [2024-12-12 16:09:56.999074] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:30.677 [2024-12-12 16:09:56.999166] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:30.677 [2024-12-12 16:09:56.999196] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:30.677 [2024-12-12 16:09:56.999212] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:30.677 [2024-12-12 16:09:57.001743] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:30.677 [2024-12-12 16:09:57.001794] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:30.677 BaseBdev2 00:13:30.677 16:09:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.677 16:09:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:30.677 16:09:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.677 16:09:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:13:30.937 spare_malloc 00:13:30.937 16:09:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.937 16:09:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:30.937 16:09:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.937 16:09:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:30.937 spare_delay 00:13:30.937 16:09:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.937 16:09:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:30.937 16:09:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.937 16:09:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:30.937 [2024-12-12 16:09:57.086059] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:30.937 [2024-12-12 16:09:57.086155] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:30.937 [2024-12-12 16:09:57.086186] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:13:30.937 [2024-12-12 16:09:57.086203] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:30.937 [2024-12-12 16:09:57.088734] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:30.937 [2024-12-12 16:09:57.088873] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:30.937 spare 00:13:30.937 16:09:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.937 16:09:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 
BaseBdev2'\''' -n raid_bdev1 00:13:30.937 16:09:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.937 16:09:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:30.937 [2024-12-12 16:09:57.098186] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:30.937 [2024-12-12 16:09:57.100458] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:30.937 [2024-12-12 16:09:57.100691] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:30.937 [2024-12-12 16:09:57.100709] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:30.937 [2024-12-12 16:09:57.101066] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:30.937 [2024-12-12 16:09:57.101291] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:30.937 [2024-12-12 16:09:57.101302] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:30.937 [2024-12-12 16:09:57.101519] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:30.937 16:09:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.937 16:09:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:30.937 16:09:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:30.937 16:09:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:30.937 16:09:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:30.937 16:09:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:30.937 16:09:57 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:30.937 16:09:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:30.937 16:09:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:30.937 16:09:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:30.937 16:09:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:30.937 16:09:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.937 16:09:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:30.937 16:09:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.937 16:09:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:30.937 16:09:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.937 16:09:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:30.937 "name": "raid_bdev1", 00:13:30.938 "uuid": "f96a3718-5861-4b69-aaa7-39880a7ae185", 00:13:30.938 "strip_size_kb": 0, 00:13:30.938 "state": "online", 00:13:30.938 "raid_level": "raid1", 00:13:30.938 "superblock": true, 00:13:30.938 "num_base_bdevs": 2, 00:13:30.938 "num_base_bdevs_discovered": 2, 00:13:30.938 "num_base_bdevs_operational": 2, 00:13:30.938 "base_bdevs_list": [ 00:13:30.938 { 00:13:30.938 "name": "BaseBdev1", 00:13:30.938 "uuid": "cc5cf087-0996-5e7e-a045-006b6c3346d2", 00:13:30.938 "is_configured": true, 00:13:30.938 "data_offset": 2048, 00:13:30.938 "data_size": 63488 00:13:30.938 }, 00:13:30.938 { 00:13:30.938 "name": "BaseBdev2", 00:13:30.938 "uuid": "a05009c4-fe1b-510a-98bb-48428b8603fb", 00:13:30.938 "is_configured": true, 00:13:30.938 "data_offset": 2048, 
00:13:30.938 "data_size": 63488 00:13:30.938 } 00:13:30.938 ] 00:13:30.938 }' 00:13:30.938 16:09:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:30.938 16:09:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:31.505 16:09:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:31.505 16:09:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:31.505 16:09:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.505 16:09:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:31.505 [2024-12-12 16:09:57.585574] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:31.505 16:09:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.505 16:09:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:13:31.505 16:09:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.505 16:09:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:31.505 16:09:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.505 16:09:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:31.505 16:09:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.505 16:09:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:13:31.505 16:09:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:13:31.505 16:09:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 
00:13:31.505 16:09:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:31.505 16:09:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.505 16:09:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:31.505 [2024-12-12 16:09:57.685087] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:31.505 16:09:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.505 16:09:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:31.505 16:09:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:31.505 16:09:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:31.505 16:09:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:31.505 16:09:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:31.505 16:09:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:31.505 16:09:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:31.505 16:09:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:31.505 16:09:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:31.506 16:09:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:31.506 16:09:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.506 16:09:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:31.506 16:09:57 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.506 16:09:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:31.506 16:09:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.506 16:09:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:31.506 "name": "raid_bdev1", 00:13:31.506 "uuid": "f96a3718-5861-4b69-aaa7-39880a7ae185", 00:13:31.506 "strip_size_kb": 0, 00:13:31.506 "state": "online", 00:13:31.506 "raid_level": "raid1", 00:13:31.506 "superblock": true, 00:13:31.506 "num_base_bdevs": 2, 00:13:31.506 "num_base_bdevs_discovered": 1, 00:13:31.506 "num_base_bdevs_operational": 1, 00:13:31.506 "base_bdevs_list": [ 00:13:31.506 { 00:13:31.506 "name": null, 00:13:31.506 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:31.506 "is_configured": false, 00:13:31.506 "data_offset": 0, 00:13:31.506 "data_size": 63488 00:13:31.506 }, 00:13:31.506 { 00:13:31.506 "name": "BaseBdev2", 00:13:31.506 "uuid": "a05009c4-fe1b-510a-98bb-48428b8603fb", 00:13:31.506 "is_configured": true, 00:13:31.506 "data_offset": 2048, 00:13:31.506 "data_size": 63488 00:13:31.506 } 00:13:31.506 ] 00:13:31.506 }' 00:13:31.506 16:09:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:31.506 16:09:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:31.506 [2024-12-12 16:09:57.762837] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:31.506 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:31.506 Zero copy mechanism will not be used. 00:13:31.506 Running I/O for 60 seconds... 
00:13:31.764 16:09:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:31.764 16:09:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.764 16:09:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:32.023 [2024-12-12 16:09:58.115428] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:32.023 16:09:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.023 16:09:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:32.023 [2024-12-12 16:09:58.186113] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:32.023 [2024-12-12 16:09:58.188354] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:32.023 [2024-12-12 16:09:58.302713] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:32.023 [2024-12-12 16:09:58.303487] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:32.281 [2024-12-12 16:09:58.530106] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:32.798 174.00 IOPS, 522.00 MiB/s [2024-12-12T16:09:59.150Z] [2024-12-12 16:09:58.911722] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:32.798 [2024-12-12 16:09:58.912183] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:33.059 16:09:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:33.059 16:09:59 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:33.059 16:09:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:33.059 16:09:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:33.059 16:09:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:33.059 16:09:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.059 16:09:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:33.059 16:09:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.059 16:09:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:33.059 16:09:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.059 16:09:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:33.059 "name": "raid_bdev1", 00:13:33.059 "uuid": "f96a3718-5861-4b69-aaa7-39880a7ae185", 00:13:33.059 "strip_size_kb": 0, 00:13:33.059 "state": "online", 00:13:33.059 "raid_level": "raid1", 00:13:33.059 "superblock": true, 00:13:33.059 "num_base_bdevs": 2, 00:13:33.059 "num_base_bdevs_discovered": 2, 00:13:33.059 "num_base_bdevs_operational": 2, 00:13:33.059 "process": { 00:13:33.059 "type": "rebuild", 00:13:33.059 "target": "spare", 00:13:33.059 "progress": { 00:13:33.059 "blocks": 12288, 00:13:33.059 "percent": 19 00:13:33.059 } 00:13:33.059 }, 00:13:33.059 "base_bdevs_list": [ 00:13:33.059 { 00:13:33.059 "name": "spare", 00:13:33.059 "uuid": "be4f5cc3-896b-5726-aca3-738fe9ad1bd7", 00:13:33.059 "is_configured": true, 00:13:33.059 "data_offset": 2048, 00:13:33.059 "data_size": 63488 00:13:33.059 }, 00:13:33.059 { 00:13:33.059 "name": "BaseBdev2", 00:13:33.059 "uuid": "a05009c4-fe1b-510a-98bb-48428b8603fb", 00:13:33.059 
"is_configured": true, 00:13:33.059 "data_offset": 2048, 00:13:33.059 "data_size": 63488 00:13:33.059 } 00:13:33.059 ] 00:13:33.059 }' 00:13:33.059 16:09:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:33.059 16:09:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:33.059 16:09:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:33.059 16:09:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:33.059 16:09:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:33.059 16:09:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.059 16:09:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:33.059 [2024-12-12 16:09:59.308353] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:33.059 [2024-12-12 16:09:59.382064] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:33.319 [2024-12-12 16:09:59.487467] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:33.319 [2024-12-12 16:09:59.497678] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:33.319 [2024-12-12 16:09:59.497833] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:33.319 [2024-12-12 16:09:59.497862] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:33.319 [2024-12-12 16:09:59.538188] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:13:33.319 16:09:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.319 16:09:59 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:33.319 16:09:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:33.319 16:09:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:33.319 16:09:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:33.319 16:09:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:33.319 16:09:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:33.319 16:09:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:33.319 16:09:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:33.319 16:09:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:33.319 16:09:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:33.319 16:09:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.319 16:09:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.319 16:09:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:33.319 16:09:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:33.319 16:09:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.319 16:09:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:33.319 "name": "raid_bdev1", 00:13:33.319 "uuid": "f96a3718-5861-4b69-aaa7-39880a7ae185", 00:13:33.319 "strip_size_kb": 0, 00:13:33.319 "state": "online", 00:13:33.320 "raid_level": "raid1", 00:13:33.320 
"superblock": true, 00:13:33.320 "num_base_bdevs": 2, 00:13:33.320 "num_base_bdevs_discovered": 1, 00:13:33.320 "num_base_bdevs_operational": 1, 00:13:33.320 "base_bdevs_list": [ 00:13:33.320 { 00:13:33.320 "name": null, 00:13:33.320 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:33.320 "is_configured": false, 00:13:33.320 "data_offset": 0, 00:13:33.320 "data_size": 63488 00:13:33.320 }, 00:13:33.320 { 00:13:33.320 "name": "BaseBdev2", 00:13:33.320 "uuid": "a05009c4-fe1b-510a-98bb-48428b8603fb", 00:13:33.320 "is_configured": true, 00:13:33.320 "data_offset": 2048, 00:13:33.320 "data_size": 63488 00:13:33.320 } 00:13:33.320 ] 00:13:33.320 }' 00:13:33.320 16:09:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:33.320 16:09:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:33.839 160.00 IOPS, 480.00 MiB/s [2024-12-12T16:10:00.191Z] 16:09:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:33.839 16:09:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:33.839 16:09:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:33.839 16:09:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:33.839 16:09:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:33.839 16:09:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:33.839 16:09:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.839 16:09:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.839 16:09:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:33.839 16:10:00 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.839 16:10:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:33.839 "name": "raid_bdev1", 00:13:33.839 "uuid": "f96a3718-5861-4b69-aaa7-39880a7ae185", 00:13:33.839 "strip_size_kb": 0, 00:13:33.839 "state": "online", 00:13:33.839 "raid_level": "raid1", 00:13:33.839 "superblock": true, 00:13:33.839 "num_base_bdevs": 2, 00:13:33.839 "num_base_bdevs_discovered": 1, 00:13:33.839 "num_base_bdevs_operational": 1, 00:13:33.839 "base_bdevs_list": [ 00:13:33.839 { 00:13:33.839 "name": null, 00:13:33.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:33.839 "is_configured": false, 00:13:33.839 "data_offset": 0, 00:13:33.839 "data_size": 63488 00:13:33.839 }, 00:13:33.839 { 00:13:33.839 "name": "BaseBdev2", 00:13:33.839 "uuid": "a05009c4-fe1b-510a-98bb-48428b8603fb", 00:13:33.839 "is_configured": true, 00:13:33.839 "data_offset": 2048, 00:13:33.839 "data_size": 63488 00:13:33.839 } 00:13:33.839 ] 00:13:33.839 }' 00:13:33.839 16:10:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:33.839 16:10:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:33.839 16:10:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:33.839 16:10:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:33.839 16:10:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:33.839 16:10:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.839 16:10:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:33.839 [2024-12-12 16:10:00.126819] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:33.839 16:10:00 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.839 16:10:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:34.098 [2024-12-12 16:10:00.190559] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:34.098 [2024-12-12 16:10:00.192947] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:34.098 [2024-12-12 16:10:00.325212] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:34.098 [2024-12-12 16:10:00.326165] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:34.357 [2024-12-12 16:10:00.542747] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:34.357 [2024-12-12 16:10:00.543363] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:34.877 159.00 IOPS, 477.00 MiB/s [2024-12-12T16:10:01.229Z] [2024-12-12 16:10:01.035131] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:34.877 [2024-12-12 16:10:01.035851] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:34.877 16:10:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:34.877 16:10:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:34.877 16:10:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:34.877 16:10:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:34.877 16:10:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:13:34.877 16:10:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.877 16:10:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:34.877 16:10:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.877 16:10:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:34.877 16:10:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.877 16:10:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:34.877 "name": "raid_bdev1", 00:13:34.877 "uuid": "f96a3718-5861-4b69-aaa7-39880a7ae185", 00:13:34.877 "strip_size_kb": 0, 00:13:34.877 "state": "online", 00:13:34.877 "raid_level": "raid1", 00:13:34.877 "superblock": true, 00:13:34.877 "num_base_bdevs": 2, 00:13:34.877 "num_base_bdevs_discovered": 2, 00:13:34.877 "num_base_bdevs_operational": 2, 00:13:34.877 "process": { 00:13:34.877 "type": "rebuild", 00:13:34.877 "target": "spare", 00:13:34.877 "progress": { 00:13:34.877 "blocks": 10240, 00:13:34.877 "percent": 16 00:13:34.877 } 00:13:34.877 }, 00:13:34.877 "base_bdevs_list": [ 00:13:34.877 { 00:13:34.877 "name": "spare", 00:13:34.877 "uuid": "be4f5cc3-896b-5726-aca3-738fe9ad1bd7", 00:13:34.877 "is_configured": true, 00:13:34.877 "data_offset": 2048, 00:13:34.877 "data_size": 63488 00:13:34.877 }, 00:13:34.877 { 00:13:34.877 "name": "BaseBdev2", 00:13:34.877 "uuid": "a05009c4-fe1b-510a-98bb-48428b8603fb", 00:13:34.877 "is_configured": true, 00:13:34.877 "data_offset": 2048, 00:13:34.877 "data_size": 63488 00:13:34.877 } 00:13:34.877 ] 00:13:34.877 }' 00:13:34.877 16:10:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:35.137 16:10:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:35.137 
16:10:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:35.137 16:10:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:35.137 16:10:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:35.137 16:10:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:35.137 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:35.137 16:10:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:13:35.137 16:10:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:35.137 16:10:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:13:35.137 16:10:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=429 00:13:35.137 16:10:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:35.137 16:10:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:35.137 16:10:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:35.137 16:10:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:35.137 16:10:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:35.137 16:10:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:35.137 16:10:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.137 16:10:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:35.137 16:10:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:35.137 16:10:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.137 16:10:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.137 16:10:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:35.137 "name": "raid_bdev1", 00:13:35.137 "uuid": "f96a3718-5861-4b69-aaa7-39880a7ae185", 00:13:35.137 "strip_size_kb": 0, 00:13:35.137 "state": "online", 00:13:35.137 "raid_level": "raid1", 00:13:35.137 "superblock": true, 00:13:35.137 "num_base_bdevs": 2, 00:13:35.137 "num_base_bdevs_discovered": 2, 00:13:35.137 "num_base_bdevs_operational": 2, 00:13:35.137 "process": { 00:13:35.137 "type": "rebuild", 00:13:35.137 "target": "spare", 00:13:35.137 "progress": { 00:13:35.137 "blocks": 12288, 00:13:35.137 "percent": 19 00:13:35.137 } 00:13:35.137 }, 00:13:35.137 "base_bdevs_list": [ 00:13:35.137 { 00:13:35.137 "name": "spare", 00:13:35.137 "uuid": "be4f5cc3-896b-5726-aca3-738fe9ad1bd7", 00:13:35.137 "is_configured": true, 00:13:35.137 "data_offset": 2048, 00:13:35.137 "data_size": 63488 00:13:35.137 }, 00:13:35.137 { 00:13:35.137 "name": "BaseBdev2", 00:13:35.137 "uuid": "a05009c4-fe1b-510a-98bb-48428b8603fb", 00:13:35.137 "is_configured": true, 00:13:35.137 "data_offset": 2048, 00:13:35.137 "data_size": 63488 00:13:35.137 } 00:13:35.137 ] 00:13:35.137 }' 00:13:35.137 16:10:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:35.137 [2024-12-12 16:10:01.395861] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:35.137 16:10:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:35.137 16:10:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:35.137 16:10:01 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:35.137 16:10:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:35.396 [2024-12-12 16:10:01.615570] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:35.396 [2024-12-12 16:10:01.616033] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:35.915 146.00 IOPS, 438.00 MiB/s [2024-12-12T16:10:02.267Z] [2024-12-12 16:10:02.075946] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:13:36.175 16:10:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:36.176 16:10:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:36.176 16:10:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:36.176 16:10:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:36.176 16:10:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:36.176 16:10:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:36.176 16:10:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.176 16:10:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.176 16:10:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:36.176 16:10:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:36.176 [2024-12-12 16:10:02.446429] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 
30720 00:13:36.176 16:10:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.176 16:10:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:36.176 "name": "raid_bdev1", 00:13:36.176 "uuid": "f96a3718-5861-4b69-aaa7-39880a7ae185", 00:13:36.176 "strip_size_kb": 0, 00:13:36.176 "state": "online", 00:13:36.176 "raid_level": "raid1", 00:13:36.176 "superblock": true, 00:13:36.176 "num_base_bdevs": 2, 00:13:36.176 "num_base_bdevs_discovered": 2, 00:13:36.176 "num_base_bdevs_operational": 2, 00:13:36.176 "process": { 00:13:36.176 "type": "rebuild", 00:13:36.176 "target": "spare", 00:13:36.176 "progress": { 00:13:36.176 "blocks": 26624, 00:13:36.176 "percent": 41 00:13:36.176 } 00:13:36.176 }, 00:13:36.176 "base_bdevs_list": [ 00:13:36.176 { 00:13:36.176 "name": "spare", 00:13:36.176 "uuid": "be4f5cc3-896b-5726-aca3-738fe9ad1bd7", 00:13:36.176 "is_configured": true, 00:13:36.176 "data_offset": 2048, 00:13:36.176 "data_size": 63488 00:13:36.176 }, 00:13:36.176 { 00:13:36.176 "name": "BaseBdev2", 00:13:36.176 "uuid": "a05009c4-fe1b-510a-98bb-48428b8603fb", 00:13:36.176 "is_configured": true, 00:13:36.176 "data_offset": 2048, 00:13:36.176 "data_size": 63488 00:13:36.176 } 00:13:36.176 ] 00:13:36.176 }' 00:13:36.176 16:10:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:36.435 16:10:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:36.435 16:10:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:36.435 16:10:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:36.435 16:10:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:36.694 125.20 IOPS, 375.60 MiB/s [2024-12-12T16:10:03.046Z] [2024-12-12 16:10:02.797754] bdev_raid.c: 859:raid_bdev_submit_rw_request: 
*DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:13:36.694 [2024-12-12 16:10:02.925142] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:13:37.263 [2024-12-12 16:10:03.356002] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:13:37.263 16:10:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:37.263 16:10:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:37.263 16:10:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:37.263 16:10:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:37.263 16:10:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:37.263 16:10:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:37.263 16:10:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.263 16:10:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:37.263 16:10:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.263 16:10:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:37.263 16:10:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.521 16:10:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:37.521 "name": "raid_bdev1", 00:13:37.521 "uuid": "f96a3718-5861-4b69-aaa7-39880a7ae185", 00:13:37.521 "strip_size_kb": 0, 00:13:37.521 "state": "online", 00:13:37.521 "raid_level": "raid1", 00:13:37.521 "superblock": true, 
00:13:37.521 "num_base_bdevs": 2, 00:13:37.521 "num_base_bdevs_discovered": 2, 00:13:37.521 "num_base_bdevs_operational": 2, 00:13:37.521 "process": { 00:13:37.521 "type": "rebuild", 00:13:37.521 "target": "spare", 00:13:37.521 "progress": { 00:13:37.521 "blocks": 43008, 00:13:37.521 "percent": 67 00:13:37.521 } 00:13:37.521 }, 00:13:37.521 "base_bdevs_list": [ 00:13:37.521 { 00:13:37.521 "name": "spare", 00:13:37.521 "uuid": "be4f5cc3-896b-5726-aca3-738fe9ad1bd7", 00:13:37.521 "is_configured": true, 00:13:37.521 "data_offset": 2048, 00:13:37.521 "data_size": 63488 00:13:37.521 }, 00:13:37.521 { 00:13:37.521 "name": "BaseBdev2", 00:13:37.521 "uuid": "a05009c4-fe1b-510a-98bb-48428b8603fb", 00:13:37.521 "is_configured": true, 00:13:37.521 "data_offset": 2048, 00:13:37.521 "data_size": 63488 00:13:37.521 } 00:13:37.521 ] 00:13:37.521 }' 00:13:37.521 16:10:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:37.521 16:10:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:37.521 16:10:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:37.521 16:10:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:37.521 16:10:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:37.780 110.67 IOPS, 332.00 MiB/s [2024-12-12T16:10:04.132Z] [2024-12-12 16:10:04.025569] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:13:37.780 [2024-12-12 16:10:04.129655] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:13:38.353 [2024-12-12 16:10:04.482283] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:13:38.613 16:10:04 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:38.613 16:10:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:38.613 16:10:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:38.613 16:10:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:38.613 16:10:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:38.613 16:10:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:38.613 16:10:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.613 16:10:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:38.613 16:10:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.613 16:10:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:38.613 16:10:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.613 100.86 IOPS, 302.57 MiB/s [2024-12-12T16:10:04.965Z] 16:10:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:38.613 "name": "raid_bdev1", 00:13:38.613 "uuid": "f96a3718-5861-4b69-aaa7-39880a7ae185", 00:13:38.613 "strip_size_kb": 0, 00:13:38.613 "state": "online", 00:13:38.613 "raid_level": "raid1", 00:13:38.613 "superblock": true, 00:13:38.613 "num_base_bdevs": 2, 00:13:38.613 "num_base_bdevs_discovered": 2, 00:13:38.613 "num_base_bdevs_operational": 2, 00:13:38.613 "process": { 00:13:38.613 "type": "rebuild", 00:13:38.613 "target": "spare", 00:13:38.613 "progress": { 00:13:38.613 "blocks": 61440, 00:13:38.613 "percent": 96 00:13:38.613 } 00:13:38.613 }, 00:13:38.613 "base_bdevs_list": [ 00:13:38.613 { 
00:13:38.613 "name": "spare", 00:13:38.613 "uuid": "be4f5cc3-896b-5726-aca3-738fe9ad1bd7", 00:13:38.613 "is_configured": true, 00:13:38.613 "data_offset": 2048, 00:13:38.613 "data_size": 63488 00:13:38.613 }, 00:13:38.613 { 00:13:38.613 "name": "BaseBdev2", 00:13:38.614 "uuid": "a05009c4-fe1b-510a-98bb-48428b8603fb", 00:13:38.614 "is_configured": true, 00:13:38.614 "data_offset": 2048, 00:13:38.614 "data_size": 63488 00:13:38.614 } 00:13:38.614 ] 00:13:38.614 }' 00:13:38.614 16:10:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:38.614 [2024-12-12 16:10:04.813865] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:38.614 16:10:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:38.614 16:10:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:38.614 16:10:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:38.614 16:10:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:38.614 [2024-12-12 16:10:04.919154] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:38.614 [2024-12-12 16:10:04.923780] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:39.552 92.50 IOPS, 277.50 MiB/s [2024-12-12T16:10:05.904Z] 16:10:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:39.552 16:10:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:39.552 16:10:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:39.552 16:10:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:39.552 16:10:05 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:13:39.552 16:10:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:39.552 16:10:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.552 16:10:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:39.552 16:10:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.552 16:10:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:39.552 16:10:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.812 16:10:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:39.812 "name": "raid_bdev1", 00:13:39.812 "uuid": "f96a3718-5861-4b69-aaa7-39880a7ae185", 00:13:39.812 "strip_size_kb": 0, 00:13:39.812 "state": "online", 00:13:39.812 "raid_level": "raid1", 00:13:39.812 "superblock": true, 00:13:39.812 "num_base_bdevs": 2, 00:13:39.812 "num_base_bdevs_discovered": 2, 00:13:39.812 "num_base_bdevs_operational": 2, 00:13:39.812 "base_bdevs_list": [ 00:13:39.812 { 00:13:39.812 "name": "spare", 00:13:39.812 "uuid": "be4f5cc3-896b-5726-aca3-738fe9ad1bd7", 00:13:39.812 "is_configured": true, 00:13:39.812 "data_offset": 2048, 00:13:39.812 "data_size": 63488 00:13:39.812 }, 00:13:39.812 { 00:13:39.812 "name": "BaseBdev2", 00:13:39.812 "uuid": "a05009c4-fe1b-510a-98bb-48428b8603fb", 00:13:39.812 "is_configured": true, 00:13:39.812 "data_offset": 2048, 00:13:39.812 "data_size": 63488 00:13:39.812 } 00:13:39.812 ] 00:13:39.812 }' 00:13:39.812 16:10:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:39.812 16:10:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:39.812 16:10:05 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:39.812 16:10:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:39.812 16:10:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:13:39.812 16:10:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:39.812 16:10:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:39.812 16:10:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:39.812 16:10:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:39.812 16:10:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:39.812 16:10:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.812 16:10:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.812 16:10:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:39.812 16:10:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:39.812 16:10:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.812 16:10:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:39.812 "name": "raid_bdev1", 00:13:39.812 "uuid": "f96a3718-5861-4b69-aaa7-39880a7ae185", 00:13:39.812 "strip_size_kb": 0, 00:13:39.812 "state": "online", 00:13:39.812 "raid_level": "raid1", 00:13:39.812 "superblock": true, 00:13:39.812 "num_base_bdevs": 2, 00:13:39.812 "num_base_bdevs_discovered": 2, 00:13:39.812 "num_base_bdevs_operational": 2, 00:13:39.812 "base_bdevs_list": [ 00:13:39.812 { 00:13:39.812 "name": "spare", 00:13:39.812 "uuid": "be4f5cc3-896b-5726-aca3-738fe9ad1bd7", 
00:13:39.812 "is_configured": true, 00:13:39.812 "data_offset": 2048, 00:13:39.812 "data_size": 63488 00:13:39.812 }, 00:13:39.812 { 00:13:39.812 "name": "BaseBdev2", 00:13:39.812 "uuid": "a05009c4-fe1b-510a-98bb-48428b8603fb", 00:13:39.812 "is_configured": true, 00:13:39.812 "data_offset": 2048, 00:13:39.812 "data_size": 63488 00:13:39.812 } 00:13:39.812 ] 00:13:39.812 }' 00:13:39.812 16:10:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:39.812 16:10:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:39.812 16:10:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:39.812 16:10:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:39.812 16:10:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:39.812 16:10:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:39.812 16:10:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:39.812 16:10:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:39.812 16:10:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:39.812 16:10:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:39.812 16:10:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:39.812 16:10:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:39.812 16:10:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:39.812 16:10:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:39.812 16:10:06 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.812 16:10:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:39.812 16:10:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.812 16:10:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:40.072 16:10:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.072 16:10:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:40.072 "name": "raid_bdev1", 00:13:40.072 "uuid": "f96a3718-5861-4b69-aaa7-39880a7ae185", 00:13:40.072 "strip_size_kb": 0, 00:13:40.072 "state": "online", 00:13:40.072 "raid_level": "raid1", 00:13:40.072 "superblock": true, 00:13:40.072 "num_base_bdevs": 2, 00:13:40.072 "num_base_bdevs_discovered": 2, 00:13:40.072 "num_base_bdevs_operational": 2, 00:13:40.072 "base_bdevs_list": [ 00:13:40.072 { 00:13:40.072 "name": "spare", 00:13:40.072 "uuid": "be4f5cc3-896b-5726-aca3-738fe9ad1bd7", 00:13:40.072 "is_configured": true, 00:13:40.072 "data_offset": 2048, 00:13:40.072 "data_size": 63488 00:13:40.072 }, 00:13:40.072 { 00:13:40.072 "name": "BaseBdev2", 00:13:40.072 "uuid": "a05009c4-fe1b-510a-98bb-48428b8603fb", 00:13:40.072 "is_configured": true, 00:13:40.072 "data_offset": 2048, 00:13:40.072 "data_size": 63488 00:13:40.072 } 00:13:40.072 ] 00:13:40.072 }' 00:13:40.072 16:10:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:40.072 16:10:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:40.332 16:10:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:40.332 16:10:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.332 16:10:06 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:40.332 [2024-12-12 16:10:06.570827] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:40.332 [2024-12-12 16:10:06.571027] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:40.332 00:13:40.332 Latency(us) 00:13:40.332 [2024-12-12T16:10:06.684Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:40.332 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:13:40.332 raid_bdev1 : 8.92 87.64 262.93 0.00 0.00 16112.03 298.70 111268.11 00:13:40.332 [2024-12-12T16:10:06.684Z] =================================================================================================================== 00:13:40.332 [2024-12-12T16:10:06.684Z] Total : 87.64 262.93 0.00 0.00 16112.03 298.70 111268.11 00:13:40.591 [2024-12-12 16:10:06.693360] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:40.591 [2024-12-12 16:10:06.693528] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:40.591 [2024-12-12 16:10:06.693647] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:40.591 [2024-12-12 16:10:06.693720] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:40.591 { 00:13:40.591 "results": [ 00:13:40.591 { 00:13:40.591 "job": "raid_bdev1", 00:13:40.591 "core_mask": "0x1", 00:13:40.591 "workload": "randrw", 00:13:40.591 "percentage": 50, 00:13:40.591 "status": "finished", 00:13:40.592 "queue_depth": 2, 00:13:40.592 "io_size": 3145728, 00:13:40.592 "runtime": 8.922467, 00:13:40.592 "iops": 87.64392179875813, 00:13:40.592 "mibps": 262.9317653962744, 00:13:40.592 "io_failed": 0, 00:13:40.592 "io_timeout": 0, 00:13:40.592 "avg_latency_us": 16112.03198606194, 00:13:40.592 "min_latency_us": 
298.70393013100437, 00:13:40.592 "max_latency_us": 111268.10829694323 00:13:40.592 } 00:13:40.592 ], 00:13:40.592 "core_count": 1 00:13:40.592 } 00:13:40.592 16:10:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.592 16:10:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.592 16:10:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.592 16:10:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:40.592 16:10:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:13:40.592 16:10:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.592 16:10:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:40.592 16:10:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:40.592 16:10:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:13:40.592 16:10:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:13:40.592 16:10:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:40.592 16:10:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:13:40.592 16:10:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:40.592 16:10:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:40.592 16:10:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:40.592 16:10:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:40.592 16:10:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:40.592 16:10:06 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:40.592 16:10:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:13:40.851 /dev/nbd0 00:13:40.851 16:10:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:40.851 16:10:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:40.851 16:10:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:40.851 16:10:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:13:40.851 16:10:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:40.851 16:10:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:40.851 16:10:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:40.851 16:10:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:13:40.851 16:10:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:40.852 16:10:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:40.852 16:10:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:40.852 1+0 records in 00:13:40.852 1+0 records out 00:13:40.852 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000406648 s, 10.1 MB/s 00:13:40.852 16:10:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:40.852 16:10:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:13:40.852 16:10:07 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:40.852 16:10:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:40.852 16:10:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:13:40.852 16:10:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:40.852 16:10:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:40.852 16:10:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:40.852 16:10:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:13:40.852 16:10:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:13:40.852 16:10:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:40.852 16:10:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:13:40.852 16:10:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:40.852 16:10:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:40.852 16:10:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:40.852 16:10:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:40.852 16:10:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:40.852 16:10:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:40.852 16:10:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:13:41.112 /dev/nbd1 00:13:41.112 16:10:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # 
basename /dev/nbd1 00:13:41.112 16:10:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:41.112 16:10:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:41.112 16:10:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:13:41.112 16:10:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:41.112 16:10:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:41.112 16:10:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:41.112 16:10:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:13:41.112 16:10:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:41.112 16:10:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:41.112 16:10:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:41.112 1+0 records in 00:13:41.112 1+0 records out 00:13:41.112 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000532959 s, 7.7 MB/s 00:13:41.112 16:10:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:41.112 16:10:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:13:41.112 16:10:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:41.112 16:10:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:41.112 16:10:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:13:41.112 16:10:07 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:41.112 16:10:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:41.112 16:10:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:41.372 16:10:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:41.372 16:10:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:41.372 16:10:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:41.372 16:10:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:41.372 16:10:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:13:41.372 16:10:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:41.372 16:10:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:41.372 16:10:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:41.372 16:10:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:41.372 16:10:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:41.372 16:10:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:41.372 16:10:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:41.372 16:10:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:41.372 16:10:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:13:41.372 16:10:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:41.372 16:10:07 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:41.372 16:10:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:41.372 16:10:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:41.372 16:10:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:41.372 16:10:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:13:41.372 16:10:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:41.372 16:10:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:41.631 16:10:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:41.631 16:10:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:41.631 16:10:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:41.631 16:10:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:41.631 16:10:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:41.631 16:10:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:41.631 16:10:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:13:41.631 16:10:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:41.631 16:10:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:13:41.631 16:10:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:13:41.631 16:10:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.631 16:10:07 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@10 -- # set +x 00:13:41.631 16:10:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.631 16:10:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:41.631 16:10:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.631 16:10:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.631 [2024-12-12 16:10:07.949939] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:41.631 [2024-12-12 16:10:07.950020] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:41.631 [2024-12-12 16:10:07.950050] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:13:41.631 [2024-12-12 16:10:07.950065] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:41.631 [2024-12-12 16:10:07.952648] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:41.631 [2024-12-12 16:10:07.952699] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:41.631 [2024-12-12 16:10:07.952808] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:41.631 [2024-12-12 16:10:07.952873] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:41.631 [2024-12-12 16:10:07.953054] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:41.631 spare 00:13:41.631 16:10:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.631 16:10:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:13:41.631 16:10:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.631 16:10:07 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.891 [2024-12-12 16:10:08.053016] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:13:41.891 [2024-12-12 16:10:08.053217] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:41.891 [2024-12-12 16:10:08.053737] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:13:41.891 [2024-12-12 16:10:08.054091] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:13:41.891 [2024-12-12 16:10:08.054153] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:13:41.891 [2024-12-12 16:10:08.054504] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:41.891 16:10:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.891 16:10:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:41.891 16:10:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:41.891 16:10:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:41.891 16:10:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:41.891 16:10:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:41.891 16:10:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:41.891 16:10:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:41.891 16:10:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:41.891 16:10:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:13:41.891 16:10:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:41.891 16:10:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.891 16:10:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:41.891 16:10:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.891 16:10:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.891 16:10:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.891 16:10:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:41.891 "name": "raid_bdev1", 00:13:41.891 "uuid": "f96a3718-5861-4b69-aaa7-39880a7ae185", 00:13:41.891 "strip_size_kb": 0, 00:13:41.891 "state": "online", 00:13:41.891 "raid_level": "raid1", 00:13:41.891 "superblock": true, 00:13:41.891 "num_base_bdevs": 2, 00:13:41.891 "num_base_bdevs_discovered": 2, 00:13:41.891 "num_base_bdevs_operational": 2, 00:13:41.891 "base_bdevs_list": [ 00:13:41.891 { 00:13:41.891 "name": "spare", 00:13:41.891 "uuid": "be4f5cc3-896b-5726-aca3-738fe9ad1bd7", 00:13:41.891 "is_configured": true, 00:13:41.891 "data_offset": 2048, 00:13:41.891 "data_size": 63488 00:13:41.891 }, 00:13:41.891 { 00:13:41.891 "name": "BaseBdev2", 00:13:41.891 "uuid": "a05009c4-fe1b-510a-98bb-48428b8603fb", 00:13:41.891 "is_configured": true, 00:13:41.891 "data_offset": 2048, 00:13:41.891 "data_size": 63488 00:13:41.891 } 00:13:41.891 ] 00:13:41.891 }' 00:13:41.891 16:10:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:41.891 16:10:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:42.150 16:10:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:42.151 16:10:08 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:42.151 16:10:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:42.151 16:10:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:42.151 16:10:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:42.411 16:10:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.411 16:10:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:42.411 16:10:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.411 16:10:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:42.411 16:10:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.411 16:10:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:42.411 "name": "raid_bdev1", 00:13:42.411 "uuid": "f96a3718-5861-4b69-aaa7-39880a7ae185", 00:13:42.411 "strip_size_kb": 0, 00:13:42.411 "state": "online", 00:13:42.411 "raid_level": "raid1", 00:13:42.411 "superblock": true, 00:13:42.411 "num_base_bdevs": 2, 00:13:42.411 "num_base_bdevs_discovered": 2, 00:13:42.411 "num_base_bdevs_operational": 2, 00:13:42.411 "base_bdevs_list": [ 00:13:42.411 { 00:13:42.411 "name": "spare", 00:13:42.411 "uuid": "be4f5cc3-896b-5726-aca3-738fe9ad1bd7", 00:13:42.411 "is_configured": true, 00:13:42.411 "data_offset": 2048, 00:13:42.411 "data_size": 63488 00:13:42.411 }, 00:13:42.411 { 00:13:42.411 "name": "BaseBdev2", 00:13:42.411 "uuid": "a05009c4-fe1b-510a-98bb-48428b8603fb", 00:13:42.411 "is_configured": true, 00:13:42.411 "data_offset": 2048, 00:13:42.411 "data_size": 63488 00:13:42.411 } 00:13:42.411 ] 00:13:42.411 }' 00:13:42.411 16:10:08 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:42.411 16:10:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:42.411 16:10:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:42.411 16:10:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:42.411 16:10:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.411 16:10:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.411 16:10:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:42.411 16:10:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:13:42.411 16:10:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.411 16:10:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:13:42.411 16:10:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:42.411 16:10:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.411 16:10:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:42.411 [2024-12-12 16:10:08.677547] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:42.411 16:10:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.411 16:10:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:42.411 16:10:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:42.411 16:10:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:13:42.411 16:10:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:42.411 16:10:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:42.411 16:10:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:42.411 16:10:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:42.411 16:10:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:42.411 16:10:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:42.411 16:10:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:42.411 16:10:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.411 16:10:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.411 16:10:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:42.411 16:10:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:42.411 16:10:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.411 16:10:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:42.411 "name": "raid_bdev1", 00:13:42.411 "uuid": "f96a3718-5861-4b69-aaa7-39880a7ae185", 00:13:42.411 "strip_size_kb": 0, 00:13:42.411 "state": "online", 00:13:42.411 "raid_level": "raid1", 00:13:42.411 "superblock": true, 00:13:42.411 "num_base_bdevs": 2, 00:13:42.411 "num_base_bdevs_discovered": 1, 00:13:42.411 "num_base_bdevs_operational": 1, 00:13:42.411 "base_bdevs_list": [ 00:13:42.411 { 00:13:42.411 "name": null, 00:13:42.411 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:42.411 "is_configured": false, 00:13:42.411 
"data_offset": 0, 00:13:42.411 "data_size": 63488 00:13:42.411 }, 00:13:42.411 { 00:13:42.411 "name": "BaseBdev2", 00:13:42.411 "uuid": "a05009c4-fe1b-510a-98bb-48428b8603fb", 00:13:42.411 "is_configured": true, 00:13:42.411 "data_offset": 2048, 00:13:42.411 "data_size": 63488 00:13:42.411 } 00:13:42.411 ] 00:13:42.411 }' 00:13:42.411 16:10:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:42.411 16:10:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:42.981 16:10:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:42.981 16:10:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.981 16:10:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:42.981 [2024-12-12 16:10:09.108875] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:42.981 [2024-12-12 16:10:09.109184] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:42.981 [2024-12-12 16:10:09.109202] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:13:42.981 [2024-12-12 16:10:09.109262] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:42.981 [2024-12-12 16:10:09.127068] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0 00:13:42.981 16:10:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.981 16:10:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:13:42.981 [2024-12-12 16:10:09.129309] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:43.920 16:10:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:43.920 16:10:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:43.921 16:10:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:43.921 16:10:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:43.921 16:10:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:43.921 16:10:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.921 16:10:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:43.921 16:10:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.921 16:10:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:43.921 16:10:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.921 16:10:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:43.921 "name": "raid_bdev1", 00:13:43.921 "uuid": "f96a3718-5861-4b69-aaa7-39880a7ae185", 00:13:43.921 "strip_size_kb": 0, 00:13:43.921 "state": "online", 
00:13:43.921 "raid_level": "raid1", 00:13:43.921 "superblock": true, 00:13:43.921 "num_base_bdevs": 2, 00:13:43.921 "num_base_bdevs_discovered": 2, 00:13:43.921 "num_base_bdevs_operational": 2, 00:13:43.921 "process": { 00:13:43.921 "type": "rebuild", 00:13:43.921 "target": "spare", 00:13:43.921 "progress": { 00:13:43.921 "blocks": 20480, 00:13:43.921 "percent": 32 00:13:43.921 } 00:13:43.921 }, 00:13:43.921 "base_bdevs_list": [ 00:13:43.921 { 00:13:43.921 "name": "spare", 00:13:43.921 "uuid": "be4f5cc3-896b-5726-aca3-738fe9ad1bd7", 00:13:43.921 "is_configured": true, 00:13:43.921 "data_offset": 2048, 00:13:43.921 "data_size": 63488 00:13:43.921 }, 00:13:43.921 { 00:13:43.921 "name": "BaseBdev2", 00:13:43.921 "uuid": "a05009c4-fe1b-510a-98bb-48428b8603fb", 00:13:43.921 "is_configured": true, 00:13:43.921 "data_offset": 2048, 00:13:43.921 "data_size": 63488 00:13:43.921 } 00:13:43.921 ] 00:13:43.921 }' 00:13:43.921 16:10:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:43.921 16:10:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:43.921 16:10:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:44.181 16:10:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:44.181 16:10:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:13:44.181 16:10:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.181 16:10:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:44.181 [2024-12-12 16:10:10.289034] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:44.181 [2024-12-12 16:10:10.338402] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:44.181 [2024-12-12 
16:10:10.338473] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:44.181 [2024-12-12 16:10:10.338494] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:44.181 [2024-12-12 16:10:10.338504] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:44.181 16:10:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.181 16:10:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:44.181 16:10:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:44.181 16:10:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:44.181 16:10:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:44.181 16:10:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:44.181 16:10:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:44.181 16:10:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:44.181 16:10:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:44.181 16:10:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:44.181 16:10:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:44.181 16:10:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.181 16:10:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:44.181 16:10:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.181 16:10:10 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:13:44.181 16:10:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.181 16:10:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:44.181 "name": "raid_bdev1", 00:13:44.181 "uuid": "f96a3718-5861-4b69-aaa7-39880a7ae185", 00:13:44.181 "strip_size_kb": 0, 00:13:44.181 "state": "online", 00:13:44.181 "raid_level": "raid1", 00:13:44.181 "superblock": true, 00:13:44.181 "num_base_bdevs": 2, 00:13:44.181 "num_base_bdevs_discovered": 1, 00:13:44.181 "num_base_bdevs_operational": 1, 00:13:44.181 "base_bdevs_list": [ 00:13:44.181 { 00:13:44.181 "name": null, 00:13:44.181 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:44.181 "is_configured": false, 00:13:44.181 "data_offset": 0, 00:13:44.181 "data_size": 63488 00:13:44.181 }, 00:13:44.181 { 00:13:44.181 "name": "BaseBdev2", 00:13:44.181 "uuid": "a05009c4-fe1b-510a-98bb-48428b8603fb", 00:13:44.181 "is_configured": true, 00:13:44.181 "data_offset": 2048, 00:13:44.181 "data_size": 63488 00:13:44.181 } 00:13:44.181 ] 00:13:44.181 }' 00:13:44.181 16:10:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:44.181 16:10:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:44.751 16:10:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:44.751 16:10:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.751 16:10:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:44.751 [2024-12-12 16:10:10.867710] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:44.751 [2024-12-12 16:10:10.867889] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:44.751 [2024-12-12 16:10:10.867940] vbdev_passthru.c: 682:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x61600000ae80 00:13:44.751 [2024-12-12 16:10:10.867954] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:44.751 [2024-12-12 16:10:10.868571] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:44.751 [2024-12-12 16:10:10.868596] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:44.751 [2024-12-12 16:10:10.868714] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:44.751 [2024-12-12 16:10:10.868729] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:44.751 [2024-12-12 16:10:10.868747] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:13:44.751 [2024-12-12 16:10:10.868787] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:44.751 [2024-12-12 16:10:10.887816] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270 00:13:44.751 spare 00:13:44.751 16:10:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.751 16:10:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:13:44.751 [2024-12-12 16:10:10.890036] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:45.690 16:10:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:45.690 16:10:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:45.690 16:10:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:45.690 16:10:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:45.690 16:10:11 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:45.690 16:10:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:45.690 16:10:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.690 16:10:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.690 16:10:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:45.690 16:10:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.690 16:10:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:45.690 "name": "raid_bdev1", 00:13:45.690 "uuid": "f96a3718-5861-4b69-aaa7-39880a7ae185", 00:13:45.690 "strip_size_kb": 0, 00:13:45.690 "state": "online", 00:13:45.690 "raid_level": "raid1", 00:13:45.690 "superblock": true, 00:13:45.690 "num_base_bdevs": 2, 00:13:45.690 "num_base_bdevs_discovered": 2, 00:13:45.690 "num_base_bdevs_operational": 2, 00:13:45.690 "process": { 00:13:45.690 "type": "rebuild", 00:13:45.690 "target": "spare", 00:13:45.690 "progress": { 00:13:45.690 "blocks": 20480, 00:13:45.690 "percent": 32 00:13:45.690 } 00:13:45.690 }, 00:13:45.690 "base_bdevs_list": [ 00:13:45.690 { 00:13:45.690 "name": "spare", 00:13:45.690 "uuid": "be4f5cc3-896b-5726-aca3-738fe9ad1bd7", 00:13:45.690 "is_configured": true, 00:13:45.690 "data_offset": 2048, 00:13:45.690 "data_size": 63488 00:13:45.690 }, 00:13:45.690 { 00:13:45.690 "name": "BaseBdev2", 00:13:45.690 "uuid": "a05009c4-fe1b-510a-98bb-48428b8603fb", 00:13:45.690 "is_configured": true, 00:13:45.690 "data_offset": 2048, 00:13:45.690 "data_size": 63488 00:13:45.690 } 00:13:45.690 ] 00:13:45.690 }' 00:13:45.690 16:10:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:45.690 16:10:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:13:45.690 16:10:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:45.690 16:10:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:45.690 16:10:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:13:45.690 16:10:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.690 16:10:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:45.690 [2024-12-12 16:10:12.025822] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:45.992 [2024-12-12 16:10:12.099333] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:45.992 [2024-12-12 16:10:12.099443] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:45.992 [2024-12-12 16:10:12.099461] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:45.992 [2024-12-12 16:10:12.099474] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:45.992 16:10:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.992 16:10:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:45.992 16:10:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:45.992 16:10:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:45.993 16:10:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:45.993 16:10:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:45.993 16:10:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:13:45.993 16:10:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:45.993 16:10:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:45.993 16:10:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:45.993 16:10:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:45.993 16:10:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.993 16:10:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:45.993 16:10:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.993 16:10:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:45.993 16:10:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.993 16:10:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:45.993 "name": "raid_bdev1", 00:13:45.993 "uuid": "f96a3718-5861-4b69-aaa7-39880a7ae185", 00:13:45.993 "strip_size_kb": 0, 00:13:45.993 "state": "online", 00:13:45.993 "raid_level": "raid1", 00:13:45.993 "superblock": true, 00:13:45.993 "num_base_bdevs": 2, 00:13:45.993 "num_base_bdevs_discovered": 1, 00:13:45.993 "num_base_bdevs_operational": 1, 00:13:45.993 "base_bdevs_list": [ 00:13:45.993 { 00:13:45.993 "name": null, 00:13:45.993 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:45.993 "is_configured": false, 00:13:45.993 "data_offset": 0, 00:13:45.993 "data_size": 63488 00:13:45.993 }, 00:13:45.993 { 00:13:45.993 "name": "BaseBdev2", 00:13:45.993 "uuid": "a05009c4-fe1b-510a-98bb-48428b8603fb", 00:13:45.993 "is_configured": true, 00:13:45.993 "data_offset": 2048, 00:13:45.993 "data_size": 63488 00:13:45.993 } 00:13:45.993 ] 00:13:45.993 }' 
00:13:45.993 16:10:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:45.993 16:10:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:46.563 16:10:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:46.563 16:10:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:46.563 16:10:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:46.563 16:10:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:46.564 16:10:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:46.564 16:10:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.564 16:10:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:46.564 16:10:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.564 16:10:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:46.564 16:10:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.564 16:10:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:46.564 "name": "raid_bdev1", 00:13:46.564 "uuid": "f96a3718-5861-4b69-aaa7-39880a7ae185", 00:13:46.564 "strip_size_kb": 0, 00:13:46.564 "state": "online", 00:13:46.564 "raid_level": "raid1", 00:13:46.564 "superblock": true, 00:13:46.564 "num_base_bdevs": 2, 00:13:46.564 "num_base_bdevs_discovered": 1, 00:13:46.564 "num_base_bdevs_operational": 1, 00:13:46.564 "base_bdevs_list": [ 00:13:46.564 { 00:13:46.564 "name": null, 00:13:46.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.564 "is_configured": false, 00:13:46.564 "data_offset": 0, 
00:13:46.564 "data_size": 63488 00:13:46.564 }, 00:13:46.564 { 00:13:46.564 "name": "BaseBdev2", 00:13:46.564 "uuid": "a05009c4-fe1b-510a-98bb-48428b8603fb", 00:13:46.564 "is_configured": true, 00:13:46.564 "data_offset": 2048, 00:13:46.564 "data_size": 63488 00:13:46.564 } 00:13:46.564 ] 00:13:46.564 }' 00:13:46.564 16:10:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:46.564 16:10:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:46.564 16:10:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:46.564 16:10:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:46.564 16:10:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:13:46.564 16:10:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.564 16:10:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:46.564 16:10:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.564 16:10:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:46.564 16:10:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.564 16:10:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:46.564 [2024-12-12 16:10:12.788424] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:46.564 [2024-12-12 16:10:12.788511] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:46.564 [2024-12-12 16:10:12.788539] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:13:46.564 [2024-12-12 16:10:12.788554] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:46.564 [2024-12-12 16:10:12.789118] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:46.564 [2024-12-12 16:10:12.789160] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:46.564 [2024-12-12 16:10:12.789252] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:13:46.564 [2024-12-12 16:10:12.789272] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:46.564 [2024-12-12 16:10:12.789282] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:46.564 [2024-12-12 16:10:12.789301] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:13:46.564 BaseBdev1 00:13:46.564 16:10:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.564 16:10:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:13:47.502 16:10:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:47.502 16:10:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:47.502 16:10:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:47.502 16:10:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:47.502 16:10:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:47.502 16:10:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:47.502 16:10:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:47.502 16:10:13 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:47.502 16:10:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:47.502 16:10:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:47.502 16:10:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:47.502 16:10:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.502 16:10:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.502 16:10:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:47.502 16:10:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.502 16:10:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:47.502 "name": "raid_bdev1", 00:13:47.502 "uuid": "f96a3718-5861-4b69-aaa7-39880a7ae185", 00:13:47.502 "strip_size_kb": 0, 00:13:47.502 "state": "online", 00:13:47.502 "raid_level": "raid1", 00:13:47.502 "superblock": true, 00:13:47.502 "num_base_bdevs": 2, 00:13:47.502 "num_base_bdevs_discovered": 1, 00:13:47.502 "num_base_bdevs_operational": 1, 00:13:47.502 "base_bdevs_list": [ 00:13:47.502 { 00:13:47.502 "name": null, 00:13:47.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:47.502 "is_configured": false, 00:13:47.502 "data_offset": 0, 00:13:47.502 "data_size": 63488 00:13:47.502 }, 00:13:47.502 { 00:13:47.502 "name": "BaseBdev2", 00:13:47.502 "uuid": "a05009c4-fe1b-510a-98bb-48428b8603fb", 00:13:47.502 "is_configured": true, 00:13:47.502 "data_offset": 2048, 00:13:47.502 "data_size": 63488 00:13:47.502 } 00:13:47.502 ] 00:13:47.502 }' 00:13:47.502 16:10:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:47.502 16:10:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
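(Annotation, not part of the log.) The recurring `verify_raid_bdev_process` checks in this log run `jq -r '.process.type // "none"'` and `'.process.target // "none"'`. The `//` is jq's alternative operator: once a rebuild finishes and the `.process` object disappears from `bdev_raid_get_bdevs` output, the lookup yields `null`, and `//` substitutes the literal string `"none"`, which is what the `[[ none == \n\o\n\e ]]` comparisons then match. A minimal reproduction (assumes `jq` is installed):

```shell
#!/usr/bin/env bash
# Hedged sketch of jq's // (alternative) operator as used by
# verify_raid_bdev_process: a missing .process object becomes "none".
raid_bdev_info='{ "name": "raid_bdev1", "state": "online" }'

# .process is absent, so .process.type evaluates to null,
# and `null // "none"` falls back to the default string.
process_type=$(jq -r '.process.type // "none"' <<< "$raid_bdev_info")
echo "$process_type"   # prints: none
```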
00:13:48.072 16:10:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:48.072 16:10:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:48.072 16:10:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:48.072 16:10:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:48.072 16:10:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:48.072 16:10:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.072 16:10:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:48.072 16:10:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.072 16:10:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:48.072 16:10:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.072 16:10:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:48.072 "name": "raid_bdev1", 00:13:48.072 "uuid": "f96a3718-5861-4b69-aaa7-39880a7ae185", 00:13:48.072 "strip_size_kb": 0, 00:13:48.072 "state": "online", 00:13:48.072 "raid_level": "raid1", 00:13:48.072 "superblock": true, 00:13:48.072 "num_base_bdevs": 2, 00:13:48.072 "num_base_bdevs_discovered": 1, 00:13:48.072 "num_base_bdevs_operational": 1, 00:13:48.072 "base_bdevs_list": [ 00:13:48.072 { 00:13:48.072 "name": null, 00:13:48.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:48.072 "is_configured": false, 00:13:48.072 "data_offset": 0, 00:13:48.072 "data_size": 63488 00:13:48.072 }, 00:13:48.072 { 00:13:48.072 "name": "BaseBdev2", 00:13:48.072 "uuid": "a05009c4-fe1b-510a-98bb-48428b8603fb", 00:13:48.072 "is_configured": true, 
00:13:48.072 "data_offset": 2048, 00:13:48.072 "data_size": 63488 00:13:48.072 } 00:13:48.072 ] 00:13:48.072 }' 00:13:48.072 16:10:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:48.072 16:10:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:48.072 16:10:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:48.331 16:10:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:48.331 16:10:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:48.331 16:10:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:13:48.331 16:10:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:48.331 16:10:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:48.331 16:10:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:48.331 16:10:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:48.331 16:10:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:48.331 16:10:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:48.331 16:10:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.331 16:10:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:48.331 [2024-12-12 16:10:14.441964] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:48.331 [2024-12-12 16:10:14.442263] 
bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:48.331 [2024-12-12 16:10:14.442331] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:48.331 request: 00:13:48.331 { 00:13:48.331 "base_bdev": "BaseBdev1", 00:13:48.331 "raid_bdev": "raid_bdev1", 00:13:48.331 "method": "bdev_raid_add_base_bdev", 00:13:48.331 "req_id": 1 00:13:48.331 } 00:13:48.331 Got JSON-RPC error response 00:13:48.331 response: 00:13:48.331 { 00:13:48.331 "code": -22, 00:13:48.331 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:13:48.331 } 00:13:48.331 16:10:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:13:48.331 16:10:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:13:48.331 16:10:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:48.331 16:10:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:48.331 16:10:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:48.331 16:10:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:13:49.274 16:10:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:49.274 16:10:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:49.274 16:10:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:49.274 16:10:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:49.274 16:10:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:49.274 16:10:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:13:49.274 16:10:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:49.274 16:10:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:49.274 16:10:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:49.274 16:10:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:49.274 16:10:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.274 16:10:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:49.274 16:10:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.274 16:10:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:49.274 16:10:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.274 16:10:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:49.274 "name": "raid_bdev1", 00:13:49.274 "uuid": "f96a3718-5861-4b69-aaa7-39880a7ae185", 00:13:49.274 "strip_size_kb": 0, 00:13:49.274 "state": "online", 00:13:49.274 "raid_level": "raid1", 00:13:49.274 "superblock": true, 00:13:49.274 "num_base_bdevs": 2, 00:13:49.274 "num_base_bdevs_discovered": 1, 00:13:49.274 "num_base_bdevs_operational": 1, 00:13:49.274 "base_bdevs_list": [ 00:13:49.274 { 00:13:49.274 "name": null, 00:13:49.274 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.274 "is_configured": false, 00:13:49.274 "data_offset": 0, 00:13:49.275 "data_size": 63488 00:13:49.275 }, 00:13:49.275 { 00:13:49.275 "name": "BaseBdev2", 00:13:49.275 "uuid": "a05009c4-fe1b-510a-98bb-48428b8603fb", 00:13:49.275 "is_configured": true, 00:13:49.275 "data_offset": 2048, 00:13:49.275 "data_size": 63488 00:13:49.275 } 00:13:49.275 ] 00:13:49.275 }' 
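(Annotation, not part of the log.) A few records back, `bdev_raid.sh@778` wraps `rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1` in `NOT`: the step passes only because the RPC fails (the JSON-RPC error `-22` response is the expected outcome, since BaseBdev1's superblock no longer matches the array). The sketch below is a simplified stand-in for that inversion helper, not the actual `common/autotest_common.sh` implementation:

```shell
#!/usr/bin/env bash
# Hedged sketch of a NOT-style helper: succeed only when the wrapped
# command fails, so an expected RPC error counts as a passing step.
NOT() {
    if "$@"; then
        return 1    # wrapped command unexpectedly succeeded
    fi
    return 0        # wrapped command failed, which is what we wanted
}

NOT false && echo "expected failure observed"
```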
00:13:49.275 16:10:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:49.275 16:10:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:49.842 16:10:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:49.842 16:10:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:49.842 16:10:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:49.842 16:10:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:49.842 16:10:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:49.842 16:10:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.842 16:10:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:49.842 16:10:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.842 16:10:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:49.842 16:10:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.842 16:10:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:49.842 "name": "raid_bdev1", 00:13:49.842 "uuid": "f96a3718-5861-4b69-aaa7-39880a7ae185", 00:13:49.842 "strip_size_kb": 0, 00:13:49.842 "state": "online", 00:13:49.842 "raid_level": "raid1", 00:13:49.842 "superblock": true, 00:13:49.842 "num_base_bdevs": 2, 00:13:49.842 "num_base_bdevs_discovered": 1, 00:13:49.842 "num_base_bdevs_operational": 1, 00:13:49.842 "base_bdevs_list": [ 00:13:49.842 { 00:13:49.842 "name": null, 00:13:49.842 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.842 "is_configured": false, 00:13:49.842 "data_offset": 0, 
00:13:49.842 "data_size": 63488 00:13:49.842 }, 00:13:49.842 { 00:13:49.842 "name": "BaseBdev2", 00:13:49.842 "uuid": "a05009c4-fe1b-510a-98bb-48428b8603fb", 00:13:49.842 "is_configured": true, 00:13:49.842 "data_offset": 2048, 00:13:49.842 "data_size": 63488 00:13:49.842 } 00:13:49.842 ] 00:13:49.842 }' 00:13:49.842 16:10:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:49.842 16:10:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:49.842 16:10:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:49.842 16:10:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:49.842 16:10:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 78920 00:13:49.842 16:10:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 78920 ']' 00:13:49.842 16:10:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 78920 00:13:49.842 16:10:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:13:49.842 16:10:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:49.842 16:10:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78920 00:13:49.842 16:10:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:49.842 16:10:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:49.842 16:10:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78920' 00:13:49.842 killing process with pid 78920 00:13:49.842 Received shutdown signal, test time was about 18.341987 seconds 00:13:49.842 00:13:49.842 Latency(us) 00:13:49.842 [2024-12-12T16:10:16.194Z] Device 
Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:49.842 [2024-12-12T16:10:16.194Z] =================================================================================================================== 00:13:49.842 [2024-12-12T16:10:16.194Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:49.842 16:10:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 78920 00:13:49.842 [2024-12-12 16:10:16.071963] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:49.842 [2024-12-12 16:10:16.072127] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:49.842 16:10:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 78920 00:13:49.842 [2024-12-12 16:10:16.072199] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:49.843 [2024-12-12 16:10:16.072211] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:13:50.101 [2024-12-12 16:10:16.329395] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:51.477 16:10:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:13:51.477 00:13:51.477 real 0m21.691s 00:13:51.477 user 0m27.810s 00:13:51.477 sys 0m2.435s 00:13:51.477 16:10:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:51.477 16:10:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:51.477 ************************************ 00:13:51.477 END TEST raid_rebuild_test_sb_io 00:13:51.477 ************************************ 00:13:51.477 16:10:17 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:13:51.477 16:10:17 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:13:51.477 16:10:17 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:51.477 
16:10:17 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:51.477 16:10:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:51.477 ************************************ 00:13:51.477 START TEST raid_rebuild_test 00:13:51.477 ************************************ 00:13:51.477 16:10:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false false true 00:13:51.477 16:10:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:51.477 16:10:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:13:51.477 16:10:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:51.478 16:10:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:51.478 16:10:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:51.478 16:10:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:51.478 16:10:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:51.478 16:10:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:51.478 16:10:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:51.478 16:10:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:51.478 16:10:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:51.478 16:10:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:51.478 16:10:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:51.478 16:10:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:51.478 16:10:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:51.478 16:10:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= 
num_base_bdevs )) 00:13:51.478 16:10:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:13:51.478 16:10:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:51.478 16:10:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:51.478 16:10:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:51.478 16:10:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:51.478 16:10:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:51.478 16:10:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:51.478 16:10:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:51.478 16:10:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:51.478 16:10:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:51.478 16:10:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:51.478 16:10:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:51.478 16:10:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:13:51.478 16:10:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=79633 00:13:51.478 16:10:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:51.478 16:10:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 79633 00:13:51.478 16:10:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 79633 ']' 00:13:51.478 16:10:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:51.478 16:10:17 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:51.478 16:10:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:51.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:51.478 16:10:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:51.478 16:10:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.478 [2024-12-12 16:10:17.808575] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:13:51.478 [2024-12-12 16:10:17.808764] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:13:51.478 Zero copy mechanism will not be used. 00:13:51.478 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79633 ] 00:13:51.826 [2024-12-12 16:10:17.984785] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:51.826 [2024-12-12 16:10:18.123743] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:52.085 [2024-12-12 16:10:18.366937] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:52.085 [2024-12-12 16:10:18.367165] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:52.343 16:10:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:52.343 16:10:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:13:52.343 16:10:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:52.343 16:10:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1_malloc 00:13:52.343 16:10:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.343 16:10:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.603 BaseBdev1_malloc 00:13:52.603 16:10:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.603 16:10:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:52.603 16:10:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.603 16:10:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.603 [2024-12-12 16:10:18.701254] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:52.603 [2024-12-12 16:10:18.701346] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:52.603 [2024-12-12 16:10:18.701374] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:52.603 [2024-12-12 16:10:18.701389] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:52.603 [2024-12-12 16:10:18.703865] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:52.603 [2024-12-12 16:10:18.703929] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:52.603 BaseBdev1 00:13:52.603 16:10:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.603 16:10:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:52.603 16:10:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:52.603 16:10:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.603 16:10:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 
00:13:52.603 BaseBdev2_malloc 00:13:52.603 16:10:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.603 16:10:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:52.603 16:10:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.603 16:10:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.603 [2024-12-12 16:10:18.764559] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:52.603 [2024-12-12 16:10:18.764652] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:52.603 [2024-12-12 16:10:18.764678] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:52.603 [2024-12-12 16:10:18.764695] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:52.603 [2024-12-12 16:10:18.767216] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:52.603 [2024-12-12 16:10:18.767347] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:52.603 BaseBdev2 00:13:52.603 16:10:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.603 16:10:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:52.603 16:10:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:52.603 16:10:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.603 16:10:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.603 BaseBdev3_malloc 00:13:52.603 16:10:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.603 16:10:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd 
bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:52.603 16:10:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.604 16:10:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.604 [2024-12-12 16:10:18.836798] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:52.604 [2024-12-12 16:10:18.836878] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:52.604 [2024-12-12 16:10:18.836916] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:52.604 [2024-12-12 16:10:18.836931] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:52.604 [2024-12-12 16:10:18.839379] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:52.604 [2024-12-12 16:10:18.839523] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:52.604 BaseBdev3 00:13:52.604 16:10:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.604 16:10:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:52.604 16:10:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:52.604 16:10:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.604 16:10:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.604 BaseBdev4_malloc 00:13:52.604 16:10:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.604 16:10:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:13:52.604 16:10:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.604 16:10:18 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:52.604 [2024-12-12 16:10:18.898685] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:13:52.604 [2024-12-12 16:10:18.898783] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:52.604 [2024-12-12 16:10:18.898809] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:52.604 [2024-12-12 16:10:18.898823] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:52.604 [2024-12-12 16:10:18.901233] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:52.604 [2024-12-12 16:10:18.901360] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:52.604 BaseBdev4 00:13:52.604 16:10:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.604 16:10:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:52.604 16:10:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.604 16:10:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.864 spare_malloc 00:13:52.864 16:10:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.864 16:10:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:52.864 16:10:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.864 16:10:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.864 spare_delay 00:13:52.864 16:10:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.864 16:10:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:52.864 
16:10:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.864 16:10:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.864 [2024-12-12 16:10:18.973231] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:52.864 [2024-12-12 16:10:18.973371] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:52.864 [2024-12-12 16:10:18.973394] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:52.864 [2024-12-12 16:10:18.973408] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:52.864 [2024-12-12 16:10:18.975780] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:52.864 [2024-12-12 16:10:18.975828] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:52.864 spare 00:13:52.864 16:10:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.864 16:10:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:13:52.864 16:10:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.864 16:10:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.864 [2024-12-12 16:10:18.985265] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:52.864 [2024-12-12 16:10:18.987402] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:52.864 [2024-12-12 16:10:18.987480] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:52.864 [2024-12-12 16:10:18.987540] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:52.864 [2024-12-12 16:10:18.987653] bdev_raid.c:1734:raid_bdev_configure_cont: 
*DEBUG*: io device register 0x617000007780 00:13:52.864 [2024-12-12 16:10:18.987672] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:52.864 [2024-12-12 16:10:18.987946] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:52.864 [2024-12-12 16:10:18.988219] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:52.864 [2024-12-12 16:10:18.988240] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:52.864 [2024-12-12 16:10:18.988393] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:52.864 16:10:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.864 16:10:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:52.864 16:10:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:52.864 16:10:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:52.864 16:10:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:52.864 16:10:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:52.864 16:10:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:52.864 16:10:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:52.864 16:10:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:52.864 16:10:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:52.864 16:10:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:52.864 16:10:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.864 16:10:18 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:52.864 16:10:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.864 16:10:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.864 16:10:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.864 16:10:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:52.864 "name": "raid_bdev1", 00:13:52.864 "uuid": "dc52d4e9-2dae-4cc8-8da1-3b3eaafe284f", 00:13:52.864 "strip_size_kb": 0, 00:13:52.864 "state": "online", 00:13:52.864 "raid_level": "raid1", 00:13:52.864 "superblock": false, 00:13:52.864 "num_base_bdevs": 4, 00:13:52.864 "num_base_bdevs_discovered": 4, 00:13:52.864 "num_base_bdevs_operational": 4, 00:13:52.864 "base_bdevs_list": [ 00:13:52.864 { 00:13:52.864 "name": "BaseBdev1", 00:13:52.864 "uuid": "6c2cb285-1c60-5f88-a58c-12cd62a979b5", 00:13:52.864 "is_configured": true, 00:13:52.864 "data_offset": 0, 00:13:52.864 "data_size": 65536 00:13:52.864 }, 00:13:52.864 { 00:13:52.864 "name": "BaseBdev2", 00:13:52.864 "uuid": "f3b0bc52-4c3f-5f9d-8b34-a96110e50d2e", 00:13:52.864 "is_configured": true, 00:13:52.864 "data_offset": 0, 00:13:52.864 "data_size": 65536 00:13:52.864 }, 00:13:52.864 { 00:13:52.864 "name": "BaseBdev3", 00:13:52.864 "uuid": "8e7775e6-492a-5215-98c4-1e2aec517ba2", 00:13:52.864 "is_configured": true, 00:13:52.864 "data_offset": 0, 00:13:52.864 "data_size": 65536 00:13:52.864 }, 00:13:52.864 { 00:13:52.864 "name": "BaseBdev4", 00:13:52.864 "uuid": "4694a9dc-a22a-51a7-80b7-14bbff2560c0", 00:13:52.864 "is_configured": true, 00:13:52.864 "data_offset": 0, 00:13:52.864 "data_size": 65536 00:13:52.864 } 00:13:52.864 ] 00:13:52.864 }' 00:13:52.864 16:10:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:52.864 16:10:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # 
set +x 00:13:53.123 16:10:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:53.123 16:10:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:53.123 16:10:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.123 16:10:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.123 [2024-12-12 16:10:19.432984] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:53.123 16:10:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.123 16:10:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:13:53.123 16:10:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.123 16:10:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.123 16:10:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:53.123 16:10:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.388 16:10:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.388 16:10:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:13:53.388 16:10:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:53.388 16:10:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:53.388 16:10:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:53.388 16:10:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:53.388 16:10:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:53.388 16:10:19 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:53.388 16:10:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:53.388 16:10:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:53.388 16:10:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:53.388 16:10:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:53.388 16:10:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:53.388 16:10:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:53.388 16:10:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:53.388 [2024-12-12 16:10:19.712195] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:53.388 /dev/nbd0 00:13:53.658 16:10:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:53.658 16:10:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:53.658 16:10:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:53.658 16:10:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:13:53.658 16:10:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:53.658 16:10:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:53.658 16:10:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:53.658 16:10:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:13:53.658 16:10:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:53.658 16:10:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:53.658 16:10:19 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:53.658 1+0 records in 00:13:53.658 1+0 records out 00:13:53.658 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000292919 s, 14.0 MB/s 00:13:53.658 16:10:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:53.658 16:10:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:13:53.658 16:10:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:53.658 16:10:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:53.658 16:10:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:13:53.658 16:10:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:53.658 16:10:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:53.658 16:10:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:13:53.658 16:10:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:13:53.658 16:10:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:14:00.227 65536+0 records in 00:14:00.227 65536+0 records out 00:14:00.227 33554432 bytes (34 MB, 32 MiB) copied, 5.98722 s, 5.6 MB/s 00:14:00.227 16:10:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:00.227 16:10:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:00.227 16:10:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:00.227 16:10:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:00.227 
16:10:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:00.228 16:10:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:00.228 16:10:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:00.228 [2024-12-12 16:10:25.976055] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:00.228 16:10:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:00.228 16:10:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:00.228 16:10:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:00.228 16:10:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:00.228 16:10:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:00.228 16:10:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:00.228 16:10:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:00.228 16:10:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:00.228 16:10:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:00.228 16:10:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.228 16:10:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.228 [2024-12-12 16:10:26.012096] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:00.228 16:10:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.228 16:10:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:00.228 16:10:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:14:00.228 16:10:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:00.228 16:10:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:00.228 16:10:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:00.228 16:10:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:00.228 16:10:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:00.228 16:10:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:00.228 16:10:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:00.228 16:10:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:00.228 16:10:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.228 16:10:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.228 16:10:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.228 16:10:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:00.228 16:10:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.228 16:10:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:00.228 "name": "raid_bdev1", 00:14:00.228 "uuid": "dc52d4e9-2dae-4cc8-8da1-3b3eaafe284f", 00:14:00.228 "strip_size_kb": 0, 00:14:00.228 "state": "online", 00:14:00.228 "raid_level": "raid1", 00:14:00.228 "superblock": false, 00:14:00.228 "num_base_bdevs": 4, 00:14:00.228 "num_base_bdevs_discovered": 3, 00:14:00.228 "num_base_bdevs_operational": 3, 00:14:00.228 "base_bdevs_list": [ 00:14:00.228 { 00:14:00.228 "name": null, 00:14:00.228 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.228 
"is_configured": false, 00:14:00.228 "data_offset": 0, 00:14:00.228 "data_size": 65536 00:14:00.228 }, 00:14:00.228 { 00:14:00.228 "name": "BaseBdev2", 00:14:00.228 "uuid": "f3b0bc52-4c3f-5f9d-8b34-a96110e50d2e", 00:14:00.228 "is_configured": true, 00:14:00.228 "data_offset": 0, 00:14:00.228 "data_size": 65536 00:14:00.228 }, 00:14:00.228 { 00:14:00.228 "name": "BaseBdev3", 00:14:00.228 "uuid": "8e7775e6-492a-5215-98c4-1e2aec517ba2", 00:14:00.228 "is_configured": true, 00:14:00.228 "data_offset": 0, 00:14:00.228 "data_size": 65536 00:14:00.228 }, 00:14:00.228 { 00:14:00.228 "name": "BaseBdev4", 00:14:00.228 "uuid": "4694a9dc-a22a-51a7-80b7-14bbff2560c0", 00:14:00.228 "is_configured": true, 00:14:00.228 "data_offset": 0, 00:14:00.228 "data_size": 65536 00:14:00.228 } 00:14:00.228 ] 00:14:00.228 }' 00:14:00.228 16:10:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:00.228 16:10:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.228 16:10:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:00.228 16:10:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.228 16:10:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.228 [2024-12-12 16:10:26.455358] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:00.228 [2024-12-12 16:10:26.472588] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09d70 00:14:00.228 16:10:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.228 16:10:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:00.228 [2024-12-12 16:10:26.474602] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:01.165 16:10:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:01.165 16:10:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:01.165 16:10:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:01.165 16:10:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:01.165 16:10:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:01.165 16:10:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.165 16:10:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:01.165 16:10:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.165 16:10:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.165 16:10:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.425 16:10:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:01.425 "name": "raid_bdev1", 00:14:01.425 "uuid": "dc52d4e9-2dae-4cc8-8da1-3b3eaafe284f", 00:14:01.425 "strip_size_kb": 0, 00:14:01.425 "state": "online", 00:14:01.425 "raid_level": "raid1", 00:14:01.425 "superblock": false, 00:14:01.425 "num_base_bdevs": 4, 00:14:01.425 "num_base_bdevs_discovered": 4, 00:14:01.425 "num_base_bdevs_operational": 4, 00:14:01.425 "process": { 00:14:01.425 "type": "rebuild", 00:14:01.425 "target": "spare", 00:14:01.425 "progress": { 00:14:01.425 "blocks": 20480, 00:14:01.425 "percent": 31 00:14:01.425 } 00:14:01.425 }, 00:14:01.425 "base_bdevs_list": [ 00:14:01.425 { 00:14:01.425 "name": "spare", 00:14:01.425 "uuid": "813d0535-13f1-50f9-8366-8a847ce0e696", 00:14:01.425 "is_configured": true, 00:14:01.425 "data_offset": 0, 00:14:01.425 "data_size": 65536 00:14:01.425 }, 00:14:01.425 { 00:14:01.425 "name": "BaseBdev2", 00:14:01.425 "uuid": 
"f3b0bc52-4c3f-5f9d-8b34-a96110e50d2e", 00:14:01.426 "is_configured": true, 00:14:01.426 "data_offset": 0, 00:14:01.426 "data_size": 65536 00:14:01.426 }, 00:14:01.426 { 00:14:01.426 "name": "BaseBdev3", 00:14:01.426 "uuid": "8e7775e6-492a-5215-98c4-1e2aec517ba2", 00:14:01.426 "is_configured": true, 00:14:01.426 "data_offset": 0, 00:14:01.426 "data_size": 65536 00:14:01.426 }, 00:14:01.426 { 00:14:01.426 "name": "BaseBdev4", 00:14:01.426 "uuid": "4694a9dc-a22a-51a7-80b7-14bbff2560c0", 00:14:01.426 "is_configured": true, 00:14:01.426 "data_offset": 0, 00:14:01.426 "data_size": 65536 00:14:01.426 } 00:14:01.426 ] 00:14:01.426 }' 00:14:01.426 16:10:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:01.426 16:10:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:01.426 16:10:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:01.426 16:10:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:01.426 16:10:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:01.426 16:10:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.426 16:10:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.426 [2024-12-12 16:10:27.637311] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:01.426 [2024-12-12 16:10:27.680416] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:01.426 [2024-12-12 16:10:27.680522] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:01.426 [2024-12-12 16:10:27.680540] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:01.426 [2024-12-12 16:10:27.680550] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove 
target bdev: No such device 00:14:01.426 16:10:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.426 16:10:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:01.426 16:10:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:01.426 16:10:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:01.426 16:10:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:01.426 16:10:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:01.426 16:10:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:01.426 16:10:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:01.426 16:10:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:01.426 16:10:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:01.426 16:10:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:01.426 16:10:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.426 16:10:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.426 16:10:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.426 16:10:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:01.426 16:10:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.426 16:10:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:01.426 "name": "raid_bdev1", 00:14:01.426 "uuid": "dc52d4e9-2dae-4cc8-8da1-3b3eaafe284f", 00:14:01.426 "strip_size_kb": 0, 00:14:01.426 "state": "online", 
00:14:01.426 "raid_level": "raid1", 00:14:01.426 "superblock": false, 00:14:01.426 "num_base_bdevs": 4, 00:14:01.426 "num_base_bdevs_discovered": 3, 00:14:01.426 "num_base_bdevs_operational": 3, 00:14:01.426 "base_bdevs_list": [ 00:14:01.426 { 00:14:01.426 "name": null, 00:14:01.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:01.426 "is_configured": false, 00:14:01.426 "data_offset": 0, 00:14:01.426 "data_size": 65536 00:14:01.426 }, 00:14:01.426 { 00:14:01.426 "name": "BaseBdev2", 00:14:01.426 "uuid": "f3b0bc52-4c3f-5f9d-8b34-a96110e50d2e", 00:14:01.426 "is_configured": true, 00:14:01.426 "data_offset": 0, 00:14:01.426 "data_size": 65536 00:14:01.426 }, 00:14:01.426 { 00:14:01.426 "name": "BaseBdev3", 00:14:01.426 "uuid": "8e7775e6-492a-5215-98c4-1e2aec517ba2", 00:14:01.426 "is_configured": true, 00:14:01.426 "data_offset": 0, 00:14:01.426 "data_size": 65536 00:14:01.426 }, 00:14:01.426 { 00:14:01.426 "name": "BaseBdev4", 00:14:01.426 "uuid": "4694a9dc-a22a-51a7-80b7-14bbff2560c0", 00:14:01.426 "is_configured": true, 00:14:01.426 "data_offset": 0, 00:14:01.426 "data_size": 65536 00:14:01.426 } 00:14:01.426 ] 00:14:01.426 }' 00:14:01.426 16:10:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:01.426 16:10:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.994 16:10:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:01.994 16:10:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:01.994 16:10:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:01.995 16:10:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:01.995 16:10:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:01.995 16:10:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:14:01.995 16:10:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.995 16:10:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.995 16:10:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.995 16:10:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.995 16:10:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:01.995 "name": "raid_bdev1", 00:14:01.995 "uuid": "dc52d4e9-2dae-4cc8-8da1-3b3eaafe284f", 00:14:01.995 "strip_size_kb": 0, 00:14:01.995 "state": "online", 00:14:01.995 "raid_level": "raid1", 00:14:01.995 "superblock": false, 00:14:01.995 "num_base_bdevs": 4, 00:14:01.995 "num_base_bdevs_discovered": 3, 00:14:01.995 "num_base_bdevs_operational": 3, 00:14:01.995 "base_bdevs_list": [ 00:14:01.995 { 00:14:01.995 "name": null, 00:14:01.995 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:01.995 "is_configured": false, 00:14:01.995 "data_offset": 0, 00:14:01.995 "data_size": 65536 00:14:01.995 }, 00:14:01.995 { 00:14:01.995 "name": "BaseBdev2", 00:14:01.995 "uuid": "f3b0bc52-4c3f-5f9d-8b34-a96110e50d2e", 00:14:01.995 "is_configured": true, 00:14:01.995 "data_offset": 0, 00:14:01.995 "data_size": 65536 00:14:01.995 }, 00:14:01.995 { 00:14:01.995 "name": "BaseBdev3", 00:14:01.995 "uuid": "8e7775e6-492a-5215-98c4-1e2aec517ba2", 00:14:01.995 "is_configured": true, 00:14:01.995 "data_offset": 0, 00:14:01.995 "data_size": 65536 00:14:01.995 }, 00:14:01.995 { 00:14:01.995 "name": "BaseBdev4", 00:14:01.995 "uuid": "4694a9dc-a22a-51a7-80b7-14bbff2560c0", 00:14:01.995 "is_configured": true, 00:14:01.995 "data_offset": 0, 00:14:01.995 "data_size": 65536 00:14:01.995 } 00:14:01.995 ] 00:14:01.995 }' 00:14:01.995 16:10:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:01.995 16:10:28 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:01.995 16:10:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:01.995 16:10:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:01.995 16:10:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:01.995 16:10:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.995 16:10:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.995 [2024-12-12 16:10:28.269070] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:01.995 [2024-12-12 16:10:28.283932] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09e40 00:14:01.995 16:10:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.995 16:10:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:01.995 [2024-12-12 16:10:28.285804] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:03.375 16:10:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:03.375 16:10:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:03.375 16:10:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:03.375 16:10:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:03.375 16:10:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:03.375 16:10:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.375 16:10:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.375 16:10:29 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:03.375 16:10:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.375 16:10:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.375 16:10:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:03.375 "name": "raid_bdev1", 00:14:03.375 "uuid": "dc52d4e9-2dae-4cc8-8da1-3b3eaafe284f", 00:14:03.375 "strip_size_kb": 0, 00:14:03.375 "state": "online", 00:14:03.375 "raid_level": "raid1", 00:14:03.375 "superblock": false, 00:14:03.375 "num_base_bdevs": 4, 00:14:03.375 "num_base_bdevs_discovered": 4, 00:14:03.375 "num_base_bdevs_operational": 4, 00:14:03.375 "process": { 00:14:03.375 "type": "rebuild", 00:14:03.375 "target": "spare", 00:14:03.375 "progress": { 00:14:03.375 "blocks": 20480, 00:14:03.375 "percent": 31 00:14:03.375 } 00:14:03.375 }, 00:14:03.375 "base_bdevs_list": [ 00:14:03.375 { 00:14:03.375 "name": "spare", 00:14:03.375 "uuid": "813d0535-13f1-50f9-8366-8a847ce0e696", 00:14:03.375 "is_configured": true, 00:14:03.375 "data_offset": 0, 00:14:03.375 "data_size": 65536 00:14:03.375 }, 00:14:03.375 { 00:14:03.375 "name": "BaseBdev2", 00:14:03.375 "uuid": "f3b0bc52-4c3f-5f9d-8b34-a96110e50d2e", 00:14:03.375 "is_configured": true, 00:14:03.375 "data_offset": 0, 00:14:03.375 "data_size": 65536 00:14:03.375 }, 00:14:03.375 { 00:14:03.375 "name": "BaseBdev3", 00:14:03.375 "uuid": "8e7775e6-492a-5215-98c4-1e2aec517ba2", 00:14:03.375 "is_configured": true, 00:14:03.375 "data_offset": 0, 00:14:03.375 "data_size": 65536 00:14:03.375 }, 00:14:03.375 { 00:14:03.375 "name": "BaseBdev4", 00:14:03.375 "uuid": "4694a9dc-a22a-51a7-80b7-14bbff2560c0", 00:14:03.375 "is_configured": true, 00:14:03.375 "data_offset": 0, 00:14:03.375 "data_size": 65536 00:14:03.375 } 00:14:03.375 ] 00:14:03.375 }' 00:14:03.375 16:10:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 
00:14:03.375 16:10:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:03.375 16:10:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:03.375 16:10:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:03.375 16:10:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:14:03.375 16:10:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:03.375 16:10:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:03.375 16:10:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:14:03.375 16:10:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:03.375 16:10:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.375 16:10:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.375 [2024-12-12 16:10:29.441245] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:03.375 [2024-12-12 16:10:29.491639] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09e40 00:14:03.375 16:10:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.375 16:10:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:14:03.375 16:10:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:14:03.375 16:10:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:03.375 16:10:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:03.375 16:10:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:03.375 16:10:29 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:03.375 16:10:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:03.375 16:10:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.375 16:10:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.375 16:10:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.375 16:10:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:03.375 16:10:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.375 16:10:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:03.375 "name": "raid_bdev1", 00:14:03.375 "uuid": "dc52d4e9-2dae-4cc8-8da1-3b3eaafe284f", 00:14:03.375 "strip_size_kb": 0, 00:14:03.375 "state": "online", 00:14:03.375 "raid_level": "raid1", 00:14:03.375 "superblock": false, 00:14:03.375 "num_base_bdevs": 4, 00:14:03.375 "num_base_bdevs_discovered": 3, 00:14:03.375 "num_base_bdevs_operational": 3, 00:14:03.375 "process": { 00:14:03.375 "type": "rebuild", 00:14:03.375 "target": "spare", 00:14:03.375 "progress": { 00:14:03.375 "blocks": 24576, 00:14:03.375 "percent": 37 00:14:03.375 } 00:14:03.375 }, 00:14:03.375 "base_bdevs_list": [ 00:14:03.375 { 00:14:03.375 "name": "spare", 00:14:03.375 "uuid": "813d0535-13f1-50f9-8366-8a847ce0e696", 00:14:03.375 "is_configured": true, 00:14:03.375 "data_offset": 0, 00:14:03.375 "data_size": 65536 00:14:03.375 }, 00:14:03.375 { 00:14:03.375 "name": null, 00:14:03.375 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:03.375 "is_configured": false, 00:14:03.375 "data_offset": 0, 00:14:03.375 "data_size": 65536 00:14:03.375 }, 00:14:03.375 { 00:14:03.375 "name": "BaseBdev3", 00:14:03.375 "uuid": "8e7775e6-492a-5215-98c4-1e2aec517ba2", 00:14:03.375 "is_configured": true, 
00:14:03.375 "data_offset": 0, 00:14:03.375 "data_size": 65536 00:14:03.375 }, 00:14:03.375 { 00:14:03.375 "name": "BaseBdev4", 00:14:03.375 "uuid": "4694a9dc-a22a-51a7-80b7-14bbff2560c0", 00:14:03.375 "is_configured": true, 00:14:03.375 "data_offset": 0, 00:14:03.375 "data_size": 65536 00:14:03.375 } 00:14:03.375 ] 00:14:03.375 }' 00:14:03.375 16:10:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:03.375 16:10:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:03.375 16:10:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:03.375 16:10:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:03.375 16:10:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=457 00:14:03.375 16:10:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:03.375 16:10:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:03.375 16:10:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:03.375 16:10:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:03.375 16:10:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:03.375 16:10:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:03.375 16:10:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.375 16:10:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:03.375 16:10:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.375 16:10:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.375 16:10:29 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.375 16:10:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:03.375 "name": "raid_bdev1", 00:14:03.375 "uuid": "dc52d4e9-2dae-4cc8-8da1-3b3eaafe284f", 00:14:03.375 "strip_size_kb": 0, 00:14:03.375 "state": "online", 00:14:03.375 "raid_level": "raid1", 00:14:03.375 "superblock": false, 00:14:03.375 "num_base_bdevs": 4, 00:14:03.375 "num_base_bdevs_discovered": 3, 00:14:03.375 "num_base_bdevs_operational": 3, 00:14:03.375 "process": { 00:14:03.375 "type": "rebuild", 00:14:03.375 "target": "spare", 00:14:03.375 "progress": { 00:14:03.375 "blocks": 26624, 00:14:03.375 "percent": 40 00:14:03.375 } 00:14:03.375 }, 00:14:03.375 "base_bdevs_list": [ 00:14:03.375 { 00:14:03.375 "name": "spare", 00:14:03.375 "uuid": "813d0535-13f1-50f9-8366-8a847ce0e696", 00:14:03.375 "is_configured": true, 00:14:03.376 "data_offset": 0, 00:14:03.376 "data_size": 65536 00:14:03.376 }, 00:14:03.376 { 00:14:03.376 "name": null, 00:14:03.376 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:03.376 "is_configured": false, 00:14:03.376 "data_offset": 0, 00:14:03.376 "data_size": 65536 00:14:03.376 }, 00:14:03.376 { 00:14:03.376 "name": "BaseBdev3", 00:14:03.376 "uuid": "8e7775e6-492a-5215-98c4-1e2aec517ba2", 00:14:03.376 "is_configured": true, 00:14:03.376 "data_offset": 0, 00:14:03.376 "data_size": 65536 00:14:03.376 }, 00:14:03.376 { 00:14:03.376 "name": "BaseBdev4", 00:14:03.376 "uuid": "4694a9dc-a22a-51a7-80b7-14bbff2560c0", 00:14:03.376 "is_configured": true, 00:14:03.376 "data_offset": 0, 00:14:03.376 "data_size": 65536 00:14:03.376 } 00:14:03.376 ] 00:14:03.376 }' 00:14:03.376 16:10:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:03.635 16:10:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:03.635 16:10:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq 
-r '.process.target // "none"' 00:14:03.635 16:10:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:03.635 16:10:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:04.572 16:10:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:04.572 16:10:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:04.572 16:10:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:04.572 16:10:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:04.572 16:10:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:04.572 16:10:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:04.572 16:10:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.572 16:10:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:04.572 16:10:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.572 16:10:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.572 16:10:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.572 16:10:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:04.572 "name": "raid_bdev1", 00:14:04.572 "uuid": "dc52d4e9-2dae-4cc8-8da1-3b3eaafe284f", 00:14:04.572 "strip_size_kb": 0, 00:14:04.572 "state": "online", 00:14:04.572 "raid_level": "raid1", 00:14:04.572 "superblock": false, 00:14:04.572 "num_base_bdevs": 4, 00:14:04.572 "num_base_bdevs_discovered": 3, 00:14:04.572 "num_base_bdevs_operational": 3, 00:14:04.572 "process": { 00:14:04.572 "type": "rebuild", 00:14:04.572 "target": "spare", 00:14:04.572 "progress": { 00:14:04.572 
"blocks": 49152, 00:14:04.572 "percent": 75 00:14:04.572 } 00:14:04.572 }, 00:14:04.572 "base_bdevs_list": [ 00:14:04.572 { 00:14:04.572 "name": "spare", 00:14:04.572 "uuid": "813d0535-13f1-50f9-8366-8a847ce0e696", 00:14:04.572 "is_configured": true, 00:14:04.572 "data_offset": 0, 00:14:04.572 "data_size": 65536 00:14:04.572 }, 00:14:04.572 { 00:14:04.572 "name": null, 00:14:04.572 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.572 "is_configured": false, 00:14:04.572 "data_offset": 0, 00:14:04.572 "data_size": 65536 00:14:04.572 }, 00:14:04.572 { 00:14:04.572 "name": "BaseBdev3", 00:14:04.572 "uuid": "8e7775e6-492a-5215-98c4-1e2aec517ba2", 00:14:04.572 "is_configured": true, 00:14:04.572 "data_offset": 0, 00:14:04.572 "data_size": 65536 00:14:04.572 }, 00:14:04.572 { 00:14:04.572 "name": "BaseBdev4", 00:14:04.572 "uuid": "4694a9dc-a22a-51a7-80b7-14bbff2560c0", 00:14:04.572 "is_configured": true, 00:14:04.572 "data_offset": 0, 00:14:04.572 "data_size": 65536 00:14:04.572 } 00:14:04.572 ] 00:14:04.572 }' 00:14:04.572 16:10:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:04.572 16:10:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:04.572 16:10:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:04.572 16:10:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:04.572 16:10:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:05.509 [2024-12-12 16:10:31.501428] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:05.509 [2024-12-12 16:10:31.501515] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:05.509 [2024-12-12 16:10:31.501566] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:05.769 16:10:31 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:05.769 16:10:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:05.769 16:10:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:05.769 16:10:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:05.769 16:10:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:05.769 16:10:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:05.769 16:10:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.769 16:10:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:05.769 16:10:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.769 16:10:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.769 16:10:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.769 16:10:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:05.769 "name": "raid_bdev1", 00:14:05.769 "uuid": "dc52d4e9-2dae-4cc8-8da1-3b3eaafe284f", 00:14:05.769 "strip_size_kb": 0, 00:14:05.769 "state": "online", 00:14:05.769 "raid_level": "raid1", 00:14:05.769 "superblock": false, 00:14:05.769 "num_base_bdevs": 4, 00:14:05.769 "num_base_bdevs_discovered": 3, 00:14:05.769 "num_base_bdevs_operational": 3, 00:14:05.769 "base_bdevs_list": [ 00:14:05.769 { 00:14:05.769 "name": "spare", 00:14:05.769 "uuid": "813d0535-13f1-50f9-8366-8a847ce0e696", 00:14:05.769 "is_configured": true, 00:14:05.769 "data_offset": 0, 00:14:05.769 "data_size": 65536 00:14:05.769 }, 00:14:05.769 { 00:14:05.769 "name": null, 00:14:05.769 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:05.769 "is_configured": false, 00:14:05.769 
"data_offset": 0, 00:14:05.769 "data_size": 65536 00:14:05.769 }, 00:14:05.769 { 00:14:05.769 "name": "BaseBdev3", 00:14:05.769 "uuid": "8e7775e6-492a-5215-98c4-1e2aec517ba2", 00:14:05.769 "is_configured": true, 00:14:05.769 "data_offset": 0, 00:14:05.769 "data_size": 65536 00:14:05.769 }, 00:14:05.769 { 00:14:05.769 "name": "BaseBdev4", 00:14:05.769 "uuid": "4694a9dc-a22a-51a7-80b7-14bbff2560c0", 00:14:05.769 "is_configured": true, 00:14:05.769 "data_offset": 0, 00:14:05.769 "data_size": 65536 00:14:05.769 } 00:14:05.769 ] 00:14:05.769 }' 00:14:05.769 16:10:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:05.769 16:10:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:05.769 16:10:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:05.769 16:10:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:05.769 16:10:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:14:05.769 16:10:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:05.769 16:10:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:05.769 16:10:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:05.769 16:10:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:05.769 16:10:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:05.769 16:10:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:05.769 16:10:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.769 16:10:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.769 16:10:32 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.769 16:10:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.769 16:10:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:05.769 "name": "raid_bdev1", 00:14:05.769 "uuid": "dc52d4e9-2dae-4cc8-8da1-3b3eaafe284f", 00:14:05.769 "strip_size_kb": 0, 00:14:05.769 "state": "online", 00:14:05.769 "raid_level": "raid1", 00:14:05.769 "superblock": false, 00:14:05.769 "num_base_bdevs": 4, 00:14:05.769 "num_base_bdevs_discovered": 3, 00:14:05.769 "num_base_bdevs_operational": 3, 00:14:05.769 "base_bdevs_list": [ 00:14:05.769 { 00:14:05.769 "name": "spare", 00:14:05.769 "uuid": "813d0535-13f1-50f9-8366-8a847ce0e696", 00:14:05.769 "is_configured": true, 00:14:05.769 "data_offset": 0, 00:14:05.769 "data_size": 65536 00:14:05.769 }, 00:14:05.769 { 00:14:05.769 "name": null, 00:14:05.769 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:05.769 "is_configured": false, 00:14:05.769 "data_offset": 0, 00:14:05.769 "data_size": 65536 00:14:05.769 }, 00:14:05.769 { 00:14:05.769 "name": "BaseBdev3", 00:14:05.769 "uuid": "8e7775e6-492a-5215-98c4-1e2aec517ba2", 00:14:05.769 "is_configured": true, 00:14:05.769 "data_offset": 0, 00:14:05.769 "data_size": 65536 00:14:05.769 }, 00:14:05.769 { 00:14:05.770 "name": "BaseBdev4", 00:14:05.770 "uuid": "4694a9dc-a22a-51a7-80b7-14bbff2560c0", 00:14:05.770 "is_configured": true, 00:14:05.770 "data_offset": 0, 00:14:05.770 "data_size": 65536 00:14:05.770 } 00:14:05.770 ] 00:14:05.770 }' 00:14:05.770 16:10:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:06.029 16:10:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:06.029 16:10:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:06.029 16:10:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none 
== \n\o\n\e ]] 00:14:06.029 16:10:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:06.029 16:10:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:06.029 16:10:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:06.029 16:10:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:06.029 16:10:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:06.029 16:10:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:06.029 16:10:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:06.029 16:10:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:06.029 16:10:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:06.029 16:10:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:06.029 16:10:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:06.029 16:10:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.029 16:10:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.029 16:10:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.029 16:10:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.029 16:10:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:06.029 "name": "raid_bdev1", 00:14:06.029 "uuid": "dc52d4e9-2dae-4cc8-8da1-3b3eaafe284f", 00:14:06.029 "strip_size_kb": 0, 00:14:06.029 "state": "online", 00:14:06.029 "raid_level": "raid1", 00:14:06.029 "superblock": false, 00:14:06.029 "num_base_bdevs": 4, 00:14:06.029 
"num_base_bdevs_discovered": 3, 00:14:06.029 "num_base_bdevs_operational": 3, 00:14:06.029 "base_bdevs_list": [ 00:14:06.029 { 00:14:06.029 "name": "spare", 00:14:06.029 "uuid": "813d0535-13f1-50f9-8366-8a847ce0e696", 00:14:06.029 "is_configured": true, 00:14:06.029 "data_offset": 0, 00:14:06.029 "data_size": 65536 00:14:06.029 }, 00:14:06.029 { 00:14:06.029 "name": null, 00:14:06.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.029 "is_configured": false, 00:14:06.029 "data_offset": 0, 00:14:06.029 "data_size": 65536 00:14:06.029 }, 00:14:06.029 { 00:14:06.029 "name": "BaseBdev3", 00:14:06.029 "uuid": "8e7775e6-492a-5215-98c4-1e2aec517ba2", 00:14:06.029 "is_configured": true, 00:14:06.029 "data_offset": 0, 00:14:06.029 "data_size": 65536 00:14:06.029 }, 00:14:06.029 { 00:14:06.029 "name": "BaseBdev4", 00:14:06.029 "uuid": "4694a9dc-a22a-51a7-80b7-14bbff2560c0", 00:14:06.029 "is_configured": true, 00:14:06.029 "data_offset": 0, 00:14:06.029 "data_size": 65536 00:14:06.029 } 00:14:06.029 ] 00:14:06.029 }' 00:14:06.029 16:10:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:06.029 16:10:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.288 16:10:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:06.288 16:10:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.288 16:10:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.288 [2024-12-12 16:10:32.601739] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:06.288 [2024-12-12 16:10:32.601778] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:06.288 [2024-12-12 16:10:32.601875] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:06.288 [2024-12-12 16:10:32.601977] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: 
raid bdev base bdevs is 0, going to free all in destruct 00:14:06.288 [2024-12-12 16:10:32.601994] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:06.288 16:10:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.288 16:10:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.288 16:10:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.288 16:10:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.288 16:10:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:14:06.288 16:10:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.547 16:10:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:06.547 16:10:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:06.547 16:10:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:06.547 16:10:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:06.547 16:10:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:06.547 16:10:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:06.547 16:10:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:06.547 16:10:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:06.547 16:10:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:06.547 16:10:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:06.547 16:10:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:06.547 16:10:32 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:06.547 16:10:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:06.547 /dev/nbd0 00:14:06.547 16:10:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:06.547 16:10:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:06.547 16:10:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:06.547 16:10:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:14:06.547 16:10:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:06.547 16:10:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:06.547 16:10:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:06.548 16:10:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:14:06.548 16:10:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:06.548 16:10:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:06.548 16:10:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:06.548 1+0 records in 00:14:06.548 1+0 records out 00:14:06.548 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000417097 s, 9.8 MB/s 00:14:06.548 16:10:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:06.548 16:10:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:14:06.548 16:10:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:14:06.807 16:10:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:06.807 16:10:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:14:06.807 16:10:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:06.807 16:10:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:06.807 16:10:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:06.807 /dev/nbd1 00:14:06.807 16:10:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:06.807 16:10:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:06.807 16:10:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:06.807 16:10:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:14:06.807 16:10:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:06.807 16:10:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:06.807 16:10:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:06.807 16:10:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:14:06.807 16:10:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:06.807 16:10:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:06.807 16:10:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:06.807 1+0 records in 00:14:06.807 1+0 records out 00:14:06.807 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00037497 s, 10.9 MB/s 00:14:06.807 16:10:33 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:07.065 16:10:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:14:07.065 16:10:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:07.065 16:10:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:07.065 16:10:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:14:07.065 16:10:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:07.065 16:10:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:07.065 16:10:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:07.065 16:10:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:07.065 16:10:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:07.065 16:10:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:07.065 16:10:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:07.065 16:10:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:07.065 16:10:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:07.065 16:10:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:07.324 16:10:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:07.324 16:10:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:07.324 16:10:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:07.324 16:10:33 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:07.324 16:10:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:07.324 16:10:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:07.324 16:10:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:07.324 16:10:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:07.324 16:10:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:07.324 16:10:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:07.583 16:10:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:07.583 16:10:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:07.583 16:10:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:07.583 16:10:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:07.584 16:10:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:07.584 16:10:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:07.584 16:10:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:07.584 16:10:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:07.584 16:10:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:14:07.584 16:10:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 79633 00:14:07.584 16:10:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 79633 ']' 00:14:07.584 16:10:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 79633 00:14:07.584 16:10:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # 
uname 00:14:07.584 16:10:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:07.584 16:10:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79633 00:14:07.584 16:10:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:07.584 16:10:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:07.584 16:10:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79633' 00:14:07.584 killing process with pid 79633 00:14:07.584 Received shutdown signal, test time was about 60.000000 seconds 00:14:07.584 00:14:07.584 Latency(us) 00:14:07.584 [2024-12-12T16:10:33.936Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:07.584 [2024-12-12T16:10:33.936Z] =================================================================================================================== 00:14:07.584 [2024-12-12T16:10:33.936Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:07.584 16:10:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 79633 00:14:07.584 [2024-12-12 16:10:33.763593] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:07.584 16:10:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 79633 00:14:08.152 [2024-12-12 16:10:34.249697] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:09.089 16:10:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:14:09.089 00:14:09.089 real 0m17.643s 00:14:09.089 user 0m19.386s 00:14:09.089 sys 0m3.305s 00:14:09.089 16:10:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:09.089 16:10:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.089 ************************************ 00:14:09.089 END TEST raid_rebuild_test 
00:14:09.089 ************************************ 00:14:09.089 16:10:35 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:14:09.089 16:10:35 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:09.089 16:10:35 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:09.089 16:10:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:09.089 ************************************ 00:14:09.089 START TEST raid_rebuild_test_sb 00:14:09.089 ************************************ 00:14:09.089 16:10:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true false true 00:14:09.089 16:10:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:09.089 16:10:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:09.089 16:10:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:14:09.089 16:10:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:09.089 16:10:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:09.089 16:10:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:09.089 16:10:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:09.089 16:10:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:09.089 16:10:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:09.089 16:10:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:09.089 16:10:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:09.089 16:10:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:09.089 16:10:35 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:09.090 16:10:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:09.090 16:10:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:09.090 16:10:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:09.090 16:10:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:14:09.090 16:10:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:09.090 16:10:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:09.090 16:10:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:09.090 16:10:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:09.090 16:10:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:09.090 16:10:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:09.090 16:10:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:09.090 16:10:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:09.090 16:10:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:09.090 16:10:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:09.090 16:10:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:09.090 16:10:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:14:09.090 16:10:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:14:09.090 16:10:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=80079 00:14:09.090 16:10:35 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:09.090 16:10:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 80079 00:14:09.090 16:10:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 80079 ']' 00:14:09.090 16:10:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:09.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:09.090 16:10:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:09.090 16:10:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:09.090 16:10:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:09.090 16:10:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.349 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:09.349 Zero copy mechanism will not be used. 00:14:09.349 [2024-12-12 16:10:35.510803] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:14:09.349 [2024-12-12 16:10:35.510930] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80079 ] 00:14:09.349 [2024-12-12 16:10:35.679394] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:09.607 [2024-12-12 16:10:35.798852] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:09.866 [2024-12-12 16:10:35.994779] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:09.866 [2024-12-12 16:10:35.994842] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:10.126 16:10:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:10.126 16:10:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:14:10.126 16:10:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:10.126 16:10:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:10.126 16:10:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.126 16:10:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.126 BaseBdev1_malloc 00:14:10.126 16:10:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.126 16:10:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:10.126 16:10:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.126 16:10:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.126 [2024-12-12 16:10:36.364690] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:14:10.126 [2024-12-12 16:10:36.364801] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:10.126 [2024-12-12 16:10:36.364839] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:10.126 [2024-12-12 16:10:36.364851] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:10.126 [2024-12-12 16:10:36.366959] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:10.126 [2024-12-12 16:10:36.367004] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:10.126 BaseBdev1 00:14:10.126 16:10:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.126 16:10:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:10.126 16:10:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:10.126 16:10:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.126 16:10:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.126 BaseBdev2_malloc 00:14:10.126 16:10:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.126 16:10:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:10.126 16:10:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.126 16:10:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.126 [2024-12-12 16:10:36.418553] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:10.126 [2024-12-12 16:10:36.418614] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:10.126 [2024-12-12 16:10:36.418633] vbdev_passthru.c: 
682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:10.126 [2024-12-12 16:10:36.418644] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:10.126 [2024-12-12 16:10:36.420731] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:10.126 [2024-12-12 16:10:36.420771] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:10.126 BaseBdev2 00:14:10.126 16:10:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.126 16:10:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:10.126 16:10:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:10.126 16:10:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.126 16:10:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.388 BaseBdev3_malloc 00:14:10.388 16:10:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.388 16:10:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:10.388 16:10:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.388 16:10:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.388 [2024-12-12 16:10:36.485970] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:10.388 [2024-12-12 16:10:36.486023] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:10.388 [2024-12-12 16:10:36.486061] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:10.388 [2024-12-12 16:10:36.486072] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:14:10.388 [2024-12-12 16:10:36.488085] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:10.388 [2024-12-12 16:10:36.488125] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:10.388 BaseBdev3 00:14:10.388 16:10:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.388 16:10:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:10.388 16:10:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:10.388 16:10:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.388 16:10:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.388 BaseBdev4_malloc 00:14:10.388 16:10:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.388 16:10:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:10.388 16:10:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.388 16:10:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.388 [2024-12-12 16:10:36.540210] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:10.388 [2024-12-12 16:10:36.540310] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:10.388 [2024-12-12 16:10:36.540335] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:10.388 [2024-12-12 16:10:36.540345] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:10.388 [2024-12-12 16:10:36.542315] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:10.388 [2024-12-12 16:10:36.542357] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:10.388 BaseBdev4 00:14:10.388 16:10:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.388 16:10:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:10.388 16:10:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.388 16:10:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.388 spare_malloc 00:14:10.388 16:10:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.388 16:10:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:10.388 16:10:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.388 16:10:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.388 spare_delay 00:14:10.388 16:10:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.388 16:10:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:10.388 16:10:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.388 16:10:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.388 [2024-12-12 16:10:36.597379] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:10.388 [2024-12-12 16:10:36.597451] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:10.388 [2024-12-12 16:10:36.597470] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:10.388 [2024-12-12 16:10:36.597480] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:14:10.388 [2024-12-12 16:10:36.599588] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:10.388 [2024-12-12 16:10:36.599700] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:10.388 spare 00:14:10.388 16:10:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.388 16:10:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:10.388 16:10:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.388 16:10:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.388 [2024-12-12 16:10:36.609410] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:10.388 [2024-12-12 16:10:36.611206] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:10.388 [2024-12-12 16:10:36.611273] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:10.388 [2024-12-12 16:10:36.611325] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:10.388 [2024-12-12 16:10:36.611520] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:10.388 [2024-12-12 16:10:36.611546] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:10.388 [2024-12-12 16:10:36.611825] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:10.388 [2024-12-12 16:10:36.612012] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:10.389 [2024-12-12 16:10:36.612024] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:10.389 [2024-12-12 16:10:36.612164] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:10.389 16:10:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.389 16:10:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:10.389 16:10:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:10.389 16:10:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:10.389 16:10:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:10.389 16:10:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:10.389 16:10:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:10.389 16:10:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:10.389 16:10:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:10.389 16:10:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:10.389 16:10:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:10.389 16:10:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.389 16:10:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:10.389 16:10:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.389 16:10:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.389 16:10:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.389 16:10:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:10.389 "name": "raid_bdev1", 00:14:10.389 "uuid": 
"918d8ca9-1923-42b5-8efb-e43f9d41e172", 00:14:10.389 "strip_size_kb": 0, 00:14:10.389 "state": "online", 00:14:10.389 "raid_level": "raid1", 00:14:10.389 "superblock": true, 00:14:10.389 "num_base_bdevs": 4, 00:14:10.389 "num_base_bdevs_discovered": 4, 00:14:10.389 "num_base_bdevs_operational": 4, 00:14:10.389 "base_bdevs_list": [ 00:14:10.389 { 00:14:10.389 "name": "BaseBdev1", 00:14:10.389 "uuid": "c5ecd3fb-c7ab-590a-ab16-21e31ed275d9", 00:14:10.389 "is_configured": true, 00:14:10.389 "data_offset": 2048, 00:14:10.389 "data_size": 63488 00:14:10.389 }, 00:14:10.389 { 00:14:10.389 "name": "BaseBdev2", 00:14:10.389 "uuid": "013df1df-2820-53e4-8a01-73a515ac14e8", 00:14:10.389 "is_configured": true, 00:14:10.389 "data_offset": 2048, 00:14:10.389 "data_size": 63488 00:14:10.389 }, 00:14:10.389 { 00:14:10.389 "name": "BaseBdev3", 00:14:10.389 "uuid": "771fd704-1438-5cc0-88a1-fb8d2d06a99e", 00:14:10.389 "is_configured": true, 00:14:10.389 "data_offset": 2048, 00:14:10.389 "data_size": 63488 00:14:10.389 }, 00:14:10.389 { 00:14:10.389 "name": "BaseBdev4", 00:14:10.389 "uuid": "19d89124-f1dc-571a-a867-310fdafaadfd", 00:14:10.389 "is_configured": true, 00:14:10.389 "data_offset": 2048, 00:14:10.389 "data_size": 63488 00:14:10.389 } 00:14:10.389 ] 00:14:10.389 }' 00:14:10.389 16:10:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:10.389 16:10:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.958 16:10:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:10.958 16:10:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.958 16:10:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.958 16:10:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:10.958 [2024-12-12 16:10:37.057063] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:14:10.958 16:10:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.958 16:10:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:14:10.958 16:10:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:10.958 16:10:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.958 16:10:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.958 16:10:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.958 16:10:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.958 16:10:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:14:10.958 16:10:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:10.958 16:10:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:10.958 16:10:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:10.958 16:10:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:10.958 16:10:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:10.958 16:10:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:10.958 16:10:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:10.958 16:10:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:10.958 16:10:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:10.958 16:10:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:10.958 16:10:37 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:10.958 16:10:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:10.958 16:10:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:11.216 [2024-12-12 16:10:37.320364] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:14:11.216 /dev/nbd0 00:14:11.216 16:10:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:11.216 16:10:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:11.216 16:10:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:11.216 16:10:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:14:11.216 16:10:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:11.216 16:10:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:11.216 16:10:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:11.216 16:10:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:14:11.216 16:10:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:11.216 16:10:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:11.216 16:10:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:11.216 1+0 records in 00:14:11.216 1+0 records out 00:14:11.216 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000315706 s, 13.0 MB/s 00:14:11.216 16:10:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:11.216 16:10:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:14:11.216 16:10:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:11.216 16:10:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:11.216 16:10:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:14:11.216 16:10:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:11.216 16:10:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:11.216 16:10:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:14:11.216 16:10:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:14:11.216 16:10:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:14:17.792 63488+0 records in 00:14:17.792 63488+0 records out 00:14:17.792 32505856 bytes (33 MB, 31 MiB) copied, 5.81979 s, 5.6 MB/s 00:14:17.792 16:10:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:17.792 16:10:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:17.792 16:10:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:17.792 16:10:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:17.792 16:10:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:17.792 16:10:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:17.792 16:10:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk 
/dev/nbd0 00:14:17.792 [2024-12-12 16:10:43.392222] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:17.792 16:10:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:17.792 16:10:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:17.792 16:10:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:17.792 16:10:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:17.792 16:10:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:17.792 16:10:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:17.792 16:10:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:17.792 16:10:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:17.792 16:10:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:17.792 16:10:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.792 16:10:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.792 [2024-12-12 16:10:43.424297] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:17.792 16:10:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.792 16:10:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:17.792 16:10:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:17.792 16:10:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:17.792 16:10:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:17.793 16:10:43 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:17.793 16:10:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:17.793 16:10:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:17.793 16:10:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:17.793 16:10:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:17.793 16:10:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:17.793 16:10:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.793 16:10:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:17.793 16:10:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.793 16:10:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.793 16:10:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.793 16:10:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:17.793 "name": "raid_bdev1", 00:14:17.793 "uuid": "918d8ca9-1923-42b5-8efb-e43f9d41e172", 00:14:17.793 "strip_size_kb": 0, 00:14:17.793 "state": "online", 00:14:17.793 "raid_level": "raid1", 00:14:17.793 "superblock": true, 00:14:17.793 "num_base_bdevs": 4, 00:14:17.793 "num_base_bdevs_discovered": 3, 00:14:17.793 "num_base_bdevs_operational": 3, 00:14:17.793 "base_bdevs_list": [ 00:14:17.793 { 00:14:17.793 "name": null, 00:14:17.793 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:17.793 "is_configured": false, 00:14:17.793 "data_offset": 0, 00:14:17.793 "data_size": 63488 00:14:17.793 }, 00:14:17.793 { 00:14:17.793 "name": "BaseBdev2", 00:14:17.793 "uuid": "013df1df-2820-53e4-8a01-73a515ac14e8", 00:14:17.793 "is_configured": true, 00:14:17.793 
"data_offset": 2048, 00:14:17.793 "data_size": 63488 00:14:17.793 }, 00:14:17.793 { 00:14:17.793 "name": "BaseBdev3", 00:14:17.793 "uuid": "771fd704-1438-5cc0-88a1-fb8d2d06a99e", 00:14:17.793 "is_configured": true, 00:14:17.793 "data_offset": 2048, 00:14:17.793 "data_size": 63488 00:14:17.793 }, 00:14:17.793 { 00:14:17.793 "name": "BaseBdev4", 00:14:17.793 "uuid": "19d89124-f1dc-571a-a867-310fdafaadfd", 00:14:17.793 "is_configured": true, 00:14:17.793 "data_offset": 2048, 00:14:17.793 "data_size": 63488 00:14:17.793 } 00:14:17.793 ] 00:14:17.793 }' 00:14:17.793 16:10:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:17.793 16:10:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.793 16:10:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:17.793 16:10:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.793 16:10:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.793 [2024-12-12 16:10:43.844132] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:17.793 [2024-12-12 16:10:43.863676] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3500 00:14:17.793 16:10:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.793 16:10:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:17.793 [2024-12-12 16:10:43.866083] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:18.733 16:10:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:18.733 16:10:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:18.733 16:10:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # 
local process_type=rebuild 00:14:18.733 16:10:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:18.733 16:10:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:18.733 16:10:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.733 16:10:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:18.733 16:10:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.733 16:10:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.733 16:10:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.733 16:10:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:18.733 "name": "raid_bdev1", 00:14:18.733 "uuid": "918d8ca9-1923-42b5-8efb-e43f9d41e172", 00:14:18.733 "strip_size_kb": 0, 00:14:18.733 "state": "online", 00:14:18.733 "raid_level": "raid1", 00:14:18.733 "superblock": true, 00:14:18.733 "num_base_bdevs": 4, 00:14:18.733 "num_base_bdevs_discovered": 4, 00:14:18.733 "num_base_bdevs_operational": 4, 00:14:18.733 "process": { 00:14:18.733 "type": "rebuild", 00:14:18.733 "target": "spare", 00:14:18.733 "progress": { 00:14:18.733 "blocks": 20480, 00:14:18.733 "percent": 32 00:14:18.733 } 00:14:18.733 }, 00:14:18.733 "base_bdevs_list": [ 00:14:18.733 { 00:14:18.733 "name": "spare", 00:14:18.733 "uuid": "9e5cf66b-9d41-5b6a-8f74-e964834525c7", 00:14:18.733 "is_configured": true, 00:14:18.733 "data_offset": 2048, 00:14:18.733 "data_size": 63488 00:14:18.733 }, 00:14:18.733 { 00:14:18.733 "name": "BaseBdev2", 00:14:18.733 "uuid": "013df1df-2820-53e4-8a01-73a515ac14e8", 00:14:18.733 "is_configured": true, 00:14:18.733 "data_offset": 2048, 00:14:18.733 "data_size": 63488 00:14:18.733 }, 00:14:18.733 { 00:14:18.733 "name": "BaseBdev3", 00:14:18.733 "uuid": 
"771fd704-1438-5cc0-88a1-fb8d2d06a99e", 00:14:18.733 "is_configured": true, 00:14:18.733 "data_offset": 2048, 00:14:18.733 "data_size": 63488 00:14:18.733 }, 00:14:18.733 { 00:14:18.733 "name": "BaseBdev4", 00:14:18.733 "uuid": "19d89124-f1dc-571a-a867-310fdafaadfd", 00:14:18.733 "is_configured": true, 00:14:18.733 "data_offset": 2048, 00:14:18.733 "data_size": 63488 00:14:18.733 } 00:14:18.733 ] 00:14:18.733 }' 00:14:18.733 16:10:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:18.733 16:10:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:18.733 16:10:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:18.733 16:10:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:18.733 16:10:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:18.733 16:10:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.733 16:10:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.733 [2024-12-12 16:10:44.996867] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:18.733 [2024-12-12 16:10:45.076314] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:18.733 [2024-12-12 16:10:45.076456] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:18.733 [2024-12-12 16:10:45.076502] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:18.733 [2024-12-12 16:10:45.076533] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:18.992 16:10:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.992 16:10:45 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:18.992 16:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:18.992 16:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:18.992 16:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:18.992 16:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:18.992 16:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:18.993 16:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:18.993 16:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:18.993 16:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:18.993 16:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:18.993 16:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.993 16:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:18.993 16:10:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.993 16:10:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.993 16:10:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.993 16:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:18.993 "name": "raid_bdev1", 00:14:18.993 "uuid": "918d8ca9-1923-42b5-8efb-e43f9d41e172", 00:14:18.993 "strip_size_kb": 0, 00:14:18.993 "state": "online", 00:14:18.993 "raid_level": "raid1", 00:14:18.993 "superblock": true, 00:14:18.993 "num_base_bdevs": 4, 00:14:18.993 
"num_base_bdevs_discovered": 3, 00:14:18.993 "num_base_bdevs_operational": 3, 00:14:18.993 "base_bdevs_list": [ 00:14:18.993 { 00:14:18.993 "name": null, 00:14:18.993 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.993 "is_configured": false, 00:14:18.993 "data_offset": 0, 00:14:18.993 "data_size": 63488 00:14:18.993 }, 00:14:18.993 { 00:14:18.993 "name": "BaseBdev2", 00:14:18.993 "uuid": "013df1df-2820-53e4-8a01-73a515ac14e8", 00:14:18.993 "is_configured": true, 00:14:18.993 "data_offset": 2048, 00:14:18.993 "data_size": 63488 00:14:18.993 }, 00:14:18.993 { 00:14:18.993 "name": "BaseBdev3", 00:14:18.993 "uuid": "771fd704-1438-5cc0-88a1-fb8d2d06a99e", 00:14:18.993 "is_configured": true, 00:14:18.993 "data_offset": 2048, 00:14:18.993 "data_size": 63488 00:14:18.993 }, 00:14:18.993 { 00:14:18.993 "name": "BaseBdev4", 00:14:18.993 "uuid": "19d89124-f1dc-571a-a867-310fdafaadfd", 00:14:18.993 "is_configured": true, 00:14:18.993 "data_offset": 2048, 00:14:18.993 "data_size": 63488 00:14:18.993 } 00:14:18.993 ] 00:14:18.993 }' 00:14:18.993 16:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:18.993 16:10:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.253 16:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:19.253 16:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:19.253 16:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:19.253 16:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:19.253 16:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:19.253 16:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:19.253 16:10:45 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.253 16:10:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.253 16:10:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.253 16:10:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.253 16:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:19.253 "name": "raid_bdev1", 00:14:19.253 "uuid": "918d8ca9-1923-42b5-8efb-e43f9d41e172", 00:14:19.253 "strip_size_kb": 0, 00:14:19.253 "state": "online", 00:14:19.253 "raid_level": "raid1", 00:14:19.253 "superblock": true, 00:14:19.253 "num_base_bdevs": 4, 00:14:19.253 "num_base_bdevs_discovered": 3, 00:14:19.253 "num_base_bdevs_operational": 3, 00:14:19.253 "base_bdevs_list": [ 00:14:19.253 { 00:14:19.253 "name": null, 00:14:19.253 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.253 "is_configured": false, 00:14:19.253 "data_offset": 0, 00:14:19.253 "data_size": 63488 00:14:19.253 }, 00:14:19.253 { 00:14:19.253 "name": "BaseBdev2", 00:14:19.253 "uuid": "013df1df-2820-53e4-8a01-73a515ac14e8", 00:14:19.253 "is_configured": true, 00:14:19.253 "data_offset": 2048, 00:14:19.253 "data_size": 63488 00:14:19.253 }, 00:14:19.253 { 00:14:19.253 "name": "BaseBdev3", 00:14:19.253 "uuid": "771fd704-1438-5cc0-88a1-fb8d2d06a99e", 00:14:19.253 "is_configured": true, 00:14:19.253 "data_offset": 2048, 00:14:19.253 "data_size": 63488 00:14:19.253 }, 00:14:19.253 { 00:14:19.253 "name": "BaseBdev4", 00:14:19.253 "uuid": "19d89124-f1dc-571a-a867-310fdafaadfd", 00:14:19.253 "is_configured": true, 00:14:19.253 "data_offset": 2048, 00:14:19.253 "data_size": 63488 00:14:19.253 } 00:14:19.253 ] 00:14:19.253 }' 00:14:19.253 16:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:19.513 16:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == 
\n\o\n\e ]] 00:14:19.513 16:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:19.513 16:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:19.513 16:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:19.513 16:10:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.513 16:10:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.513 [2024-12-12 16:10:45.663974] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:19.513 [2024-12-12 16:10:45.678680] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca35d0 00:14:19.513 16:10:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.513 16:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:19.513 [2024-12-12 16:10:45.680953] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:20.451 16:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:20.451 16:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:20.451 16:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:20.451 16:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:20.451 16:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:20.451 16:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.451 16:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:20.451 16:10:46 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.451 16:10:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.451 16:10:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.451 16:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:20.451 "name": "raid_bdev1", 00:14:20.451 "uuid": "918d8ca9-1923-42b5-8efb-e43f9d41e172", 00:14:20.451 "strip_size_kb": 0, 00:14:20.451 "state": "online", 00:14:20.451 "raid_level": "raid1", 00:14:20.451 "superblock": true, 00:14:20.451 "num_base_bdevs": 4, 00:14:20.451 "num_base_bdevs_discovered": 4, 00:14:20.451 "num_base_bdevs_operational": 4, 00:14:20.451 "process": { 00:14:20.451 "type": "rebuild", 00:14:20.451 "target": "spare", 00:14:20.451 "progress": { 00:14:20.451 "blocks": 20480, 00:14:20.451 "percent": 32 00:14:20.451 } 00:14:20.451 }, 00:14:20.451 "base_bdevs_list": [ 00:14:20.451 { 00:14:20.451 "name": "spare", 00:14:20.451 "uuid": "9e5cf66b-9d41-5b6a-8f74-e964834525c7", 00:14:20.451 "is_configured": true, 00:14:20.451 "data_offset": 2048, 00:14:20.451 "data_size": 63488 00:14:20.451 }, 00:14:20.451 { 00:14:20.451 "name": "BaseBdev2", 00:14:20.451 "uuid": "013df1df-2820-53e4-8a01-73a515ac14e8", 00:14:20.451 "is_configured": true, 00:14:20.451 "data_offset": 2048, 00:14:20.451 "data_size": 63488 00:14:20.451 }, 00:14:20.451 { 00:14:20.451 "name": "BaseBdev3", 00:14:20.451 "uuid": "771fd704-1438-5cc0-88a1-fb8d2d06a99e", 00:14:20.451 "is_configured": true, 00:14:20.451 "data_offset": 2048, 00:14:20.451 "data_size": 63488 00:14:20.451 }, 00:14:20.451 { 00:14:20.451 "name": "BaseBdev4", 00:14:20.451 "uuid": "19d89124-f1dc-571a-a867-310fdafaadfd", 00:14:20.451 "is_configured": true, 00:14:20.451 "data_offset": 2048, 00:14:20.451 "data_size": 63488 00:14:20.451 } 00:14:20.451 ] 00:14:20.451 }' 00:14:20.451 16:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- 
# jq -r '.process.type // "none"' 00:14:20.451 16:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:20.451 16:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:20.451 16:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:20.451 16:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:14:20.451 16:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:14:20.451 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:14:20.451 16:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:20.451 16:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:20.451 16:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:14:20.451 16:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:20.451 16:10:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.451 16:10:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.451 [2024-12-12 16:10:46.800320] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:20.711 [2024-12-12 16:10:46.990783] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca35d0 00:14:20.711 16:10:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.711 16:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:14:20.711 16:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:14:20.711 16:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:20.711 16:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:20.711 16:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:20.711 16:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:20.711 16:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:20.711 16:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.711 16:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:20.711 16:10:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.711 16:10:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.711 16:10:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.711 16:10:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:20.711 "name": "raid_bdev1", 00:14:20.711 "uuid": "918d8ca9-1923-42b5-8efb-e43f9d41e172", 00:14:20.711 "strip_size_kb": 0, 00:14:20.711 "state": "online", 00:14:20.711 "raid_level": "raid1", 00:14:20.711 "superblock": true, 00:14:20.711 "num_base_bdevs": 4, 00:14:20.711 "num_base_bdevs_discovered": 3, 00:14:20.711 "num_base_bdevs_operational": 3, 00:14:20.711 "process": { 00:14:20.711 "type": "rebuild", 00:14:20.711 "target": "spare", 00:14:20.711 "progress": { 00:14:20.711 "blocks": 24576, 00:14:20.711 "percent": 38 00:14:20.711 } 00:14:20.711 }, 00:14:20.711 "base_bdevs_list": [ 00:14:20.711 { 00:14:20.711 "name": "spare", 00:14:20.711 "uuid": "9e5cf66b-9d41-5b6a-8f74-e964834525c7", 00:14:20.711 "is_configured": true, 00:14:20.711 "data_offset": 2048, 00:14:20.711 "data_size": 63488 00:14:20.711 }, 00:14:20.711 { 00:14:20.711 "name": null, 
00:14:20.711 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.711 "is_configured": false, 00:14:20.711 "data_offset": 0, 00:14:20.711 "data_size": 63488 00:14:20.711 }, 00:14:20.711 { 00:14:20.711 "name": "BaseBdev3", 00:14:20.711 "uuid": "771fd704-1438-5cc0-88a1-fb8d2d06a99e", 00:14:20.711 "is_configured": true, 00:14:20.711 "data_offset": 2048, 00:14:20.711 "data_size": 63488 00:14:20.711 }, 00:14:20.711 { 00:14:20.711 "name": "BaseBdev4", 00:14:20.711 "uuid": "19d89124-f1dc-571a-a867-310fdafaadfd", 00:14:20.711 "is_configured": true, 00:14:20.711 "data_offset": 2048, 00:14:20.711 "data_size": 63488 00:14:20.711 } 00:14:20.711 ] 00:14:20.711 }' 00:14:20.711 16:10:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:20.971 16:10:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:20.971 16:10:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:20.971 16:10:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:20.971 16:10:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=475 00:14:20.971 16:10:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:20.971 16:10:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:20.971 16:10:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:20.971 16:10:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:20.971 16:10:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:20.971 16:10:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:20.971 16:10:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:14:20.971 16:10:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:20.971 16:10:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.971 16:10:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.971 16:10:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.971 16:10:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:20.971 "name": "raid_bdev1", 00:14:20.971 "uuid": "918d8ca9-1923-42b5-8efb-e43f9d41e172", 00:14:20.971 "strip_size_kb": 0, 00:14:20.971 "state": "online", 00:14:20.971 "raid_level": "raid1", 00:14:20.971 "superblock": true, 00:14:20.971 "num_base_bdevs": 4, 00:14:20.971 "num_base_bdevs_discovered": 3, 00:14:20.971 "num_base_bdevs_operational": 3, 00:14:20.971 "process": { 00:14:20.971 "type": "rebuild", 00:14:20.971 "target": "spare", 00:14:20.971 "progress": { 00:14:20.971 "blocks": 26624, 00:14:20.971 "percent": 41 00:14:20.971 } 00:14:20.971 }, 00:14:20.971 "base_bdevs_list": [ 00:14:20.971 { 00:14:20.971 "name": "spare", 00:14:20.971 "uuid": "9e5cf66b-9d41-5b6a-8f74-e964834525c7", 00:14:20.971 "is_configured": true, 00:14:20.971 "data_offset": 2048, 00:14:20.971 "data_size": 63488 00:14:20.971 }, 00:14:20.971 { 00:14:20.971 "name": null, 00:14:20.971 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.971 "is_configured": false, 00:14:20.971 "data_offset": 0, 00:14:20.971 "data_size": 63488 00:14:20.971 }, 00:14:20.971 { 00:14:20.971 "name": "BaseBdev3", 00:14:20.971 "uuid": "771fd704-1438-5cc0-88a1-fb8d2d06a99e", 00:14:20.971 "is_configured": true, 00:14:20.971 "data_offset": 2048, 00:14:20.971 "data_size": 63488 00:14:20.971 }, 00:14:20.971 { 00:14:20.971 "name": "BaseBdev4", 00:14:20.971 "uuid": "19d89124-f1dc-571a-a867-310fdafaadfd", 00:14:20.971 "is_configured": true, 00:14:20.971 "data_offset": 
2048, 00:14:20.971 "data_size": 63488 00:14:20.971 } 00:14:20.971 ] 00:14:20.971 }' 00:14:20.971 16:10:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:20.971 16:10:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:20.971 16:10:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:20.971 16:10:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:20.971 16:10:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:22.352 16:10:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:22.352 16:10:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:22.352 16:10:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:22.352 16:10:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:22.352 16:10:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:22.352 16:10:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:22.352 16:10:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.352 16:10:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:22.352 16:10:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.352 16:10:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.352 16:10:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.352 16:10:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:22.352 "name": "raid_bdev1", 00:14:22.352 
"uuid": "918d8ca9-1923-42b5-8efb-e43f9d41e172", 00:14:22.352 "strip_size_kb": 0, 00:14:22.352 "state": "online", 00:14:22.352 "raid_level": "raid1", 00:14:22.352 "superblock": true, 00:14:22.352 "num_base_bdevs": 4, 00:14:22.352 "num_base_bdevs_discovered": 3, 00:14:22.352 "num_base_bdevs_operational": 3, 00:14:22.352 "process": { 00:14:22.352 "type": "rebuild", 00:14:22.352 "target": "spare", 00:14:22.352 "progress": { 00:14:22.352 "blocks": 49152, 00:14:22.352 "percent": 77 00:14:22.352 } 00:14:22.352 }, 00:14:22.352 "base_bdevs_list": [ 00:14:22.352 { 00:14:22.352 "name": "spare", 00:14:22.352 "uuid": "9e5cf66b-9d41-5b6a-8f74-e964834525c7", 00:14:22.352 "is_configured": true, 00:14:22.352 "data_offset": 2048, 00:14:22.352 "data_size": 63488 00:14:22.352 }, 00:14:22.352 { 00:14:22.352 "name": null, 00:14:22.352 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.352 "is_configured": false, 00:14:22.352 "data_offset": 0, 00:14:22.352 "data_size": 63488 00:14:22.352 }, 00:14:22.352 { 00:14:22.352 "name": "BaseBdev3", 00:14:22.352 "uuid": "771fd704-1438-5cc0-88a1-fb8d2d06a99e", 00:14:22.352 "is_configured": true, 00:14:22.352 "data_offset": 2048, 00:14:22.352 "data_size": 63488 00:14:22.352 }, 00:14:22.352 { 00:14:22.352 "name": "BaseBdev4", 00:14:22.352 "uuid": "19d89124-f1dc-571a-a867-310fdafaadfd", 00:14:22.352 "is_configured": true, 00:14:22.352 "data_offset": 2048, 00:14:22.352 "data_size": 63488 00:14:22.352 } 00:14:22.352 ] 00:14:22.352 }' 00:14:22.352 16:10:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:22.352 16:10:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:22.352 16:10:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:22.352 16:10:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:22.352 16:10:48 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:14:22.612 [2024-12-12 16:10:48.906617] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:22.612 [2024-12-12 16:10:48.906781] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:22.612 [2024-12-12 16:10:48.906931] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:23.182 16:10:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:23.182 16:10:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:23.182 16:10:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:23.182 16:10:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:23.182 16:10:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:23.182 16:10:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:23.182 16:10:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.182 16:10:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.182 16:10:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:23.182 16:10:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.182 16:10:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.182 16:10:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:23.182 "name": "raid_bdev1", 00:14:23.182 "uuid": "918d8ca9-1923-42b5-8efb-e43f9d41e172", 00:14:23.182 "strip_size_kb": 0, 00:14:23.182 "state": "online", 00:14:23.182 "raid_level": "raid1", 00:14:23.182 "superblock": true, 00:14:23.182 "num_base_bdevs": 
4, 00:14:23.182 "num_base_bdevs_discovered": 3, 00:14:23.182 "num_base_bdevs_operational": 3, 00:14:23.182 "base_bdevs_list": [ 00:14:23.182 { 00:14:23.182 "name": "spare", 00:14:23.182 "uuid": "9e5cf66b-9d41-5b6a-8f74-e964834525c7", 00:14:23.182 "is_configured": true, 00:14:23.182 "data_offset": 2048, 00:14:23.182 "data_size": 63488 00:14:23.182 }, 00:14:23.182 { 00:14:23.182 "name": null, 00:14:23.182 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.182 "is_configured": false, 00:14:23.182 "data_offset": 0, 00:14:23.182 "data_size": 63488 00:14:23.182 }, 00:14:23.182 { 00:14:23.182 "name": "BaseBdev3", 00:14:23.182 "uuid": "771fd704-1438-5cc0-88a1-fb8d2d06a99e", 00:14:23.182 "is_configured": true, 00:14:23.182 "data_offset": 2048, 00:14:23.182 "data_size": 63488 00:14:23.182 }, 00:14:23.182 { 00:14:23.182 "name": "BaseBdev4", 00:14:23.182 "uuid": "19d89124-f1dc-571a-a867-310fdafaadfd", 00:14:23.182 "is_configured": true, 00:14:23.182 "data_offset": 2048, 00:14:23.182 "data_size": 63488 00:14:23.182 } 00:14:23.182 ] 00:14:23.182 }' 00:14:23.182 16:10:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:23.442 16:10:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:23.442 16:10:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:23.442 16:10:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:23.442 16:10:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:14:23.442 16:10:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:23.442 16:10:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:23.442 16:10:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:23.442 16:10:49 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:23.442 16:10:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:23.442 16:10:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.442 16:10:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:23.442 16:10:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.442 16:10:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.442 16:10:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.442 16:10:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:23.442 "name": "raid_bdev1", 00:14:23.442 "uuid": "918d8ca9-1923-42b5-8efb-e43f9d41e172", 00:14:23.442 "strip_size_kb": 0, 00:14:23.442 "state": "online", 00:14:23.442 "raid_level": "raid1", 00:14:23.442 "superblock": true, 00:14:23.442 "num_base_bdevs": 4, 00:14:23.442 "num_base_bdevs_discovered": 3, 00:14:23.442 "num_base_bdevs_operational": 3, 00:14:23.442 "base_bdevs_list": [ 00:14:23.442 { 00:14:23.442 "name": "spare", 00:14:23.442 "uuid": "9e5cf66b-9d41-5b6a-8f74-e964834525c7", 00:14:23.442 "is_configured": true, 00:14:23.442 "data_offset": 2048, 00:14:23.442 "data_size": 63488 00:14:23.442 }, 00:14:23.442 { 00:14:23.442 "name": null, 00:14:23.442 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.442 "is_configured": false, 00:14:23.442 "data_offset": 0, 00:14:23.442 "data_size": 63488 00:14:23.442 }, 00:14:23.442 { 00:14:23.442 "name": "BaseBdev3", 00:14:23.442 "uuid": "771fd704-1438-5cc0-88a1-fb8d2d06a99e", 00:14:23.442 "is_configured": true, 00:14:23.442 "data_offset": 2048, 00:14:23.442 "data_size": 63488 00:14:23.442 }, 00:14:23.442 { 00:14:23.442 "name": "BaseBdev4", 00:14:23.442 "uuid": 
"19d89124-f1dc-571a-a867-310fdafaadfd", 00:14:23.442 "is_configured": true, 00:14:23.442 "data_offset": 2048, 00:14:23.442 "data_size": 63488 00:14:23.442 } 00:14:23.442 ] 00:14:23.442 }' 00:14:23.442 16:10:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:23.442 16:10:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:23.442 16:10:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:23.442 16:10:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:23.442 16:10:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:23.442 16:10:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:23.442 16:10:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:23.442 16:10:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:23.442 16:10:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:23.442 16:10:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:23.442 16:10:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:23.442 16:10:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:23.442 16:10:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:23.442 16:10:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:23.442 16:10:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.442 16:10:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.442 16:10:49 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.442 16:10:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:23.442 16:10:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.442 16:10:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:23.442 "name": "raid_bdev1", 00:14:23.442 "uuid": "918d8ca9-1923-42b5-8efb-e43f9d41e172", 00:14:23.442 "strip_size_kb": 0, 00:14:23.442 "state": "online", 00:14:23.442 "raid_level": "raid1", 00:14:23.442 "superblock": true, 00:14:23.442 "num_base_bdevs": 4, 00:14:23.442 "num_base_bdevs_discovered": 3, 00:14:23.442 "num_base_bdevs_operational": 3, 00:14:23.442 "base_bdevs_list": [ 00:14:23.442 { 00:14:23.442 "name": "spare", 00:14:23.442 "uuid": "9e5cf66b-9d41-5b6a-8f74-e964834525c7", 00:14:23.442 "is_configured": true, 00:14:23.442 "data_offset": 2048, 00:14:23.442 "data_size": 63488 00:14:23.442 }, 00:14:23.442 { 00:14:23.442 "name": null, 00:14:23.442 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.442 "is_configured": false, 00:14:23.442 "data_offset": 0, 00:14:23.442 "data_size": 63488 00:14:23.442 }, 00:14:23.442 { 00:14:23.442 "name": "BaseBdev3", 00:14:23.442 "uuid": "771fd704-1438-5cc0-88a1-fb8d2d06a99e", 00:14:23.443 "is_configured": true, 00:14:23.443 "data_offset": 2048, 00:14:23.443 "data_size": 63488 00:14:23.443 }, 00:14:23.443 { 00:14:23.443 "name": "BaseBdev4", 00:14:23.443 "uuid": "19d89124-f1dc-571a-a867-310fdafaadfd", 00:14:23.443 "is_configured": true, 00:14:23.443 "data_offset": 2048, 00:14:23.443 "data_size": 63488 00:14:23.443 } 00:14:23.443 ] 00:14:23.443 }' 00:14:23.443 16:10:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:23.443 16:10:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.012 16:10:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 
-- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:24.012 16:10:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.012 16:10:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.012 [2024-12-12 16:10:50.108033] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:24.012 [2024-12-12 16:10:50.108156] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:24.012 [2024-12-12 16:10:50.108289] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:24.012 [2024-12-12 16:10:50.108427] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:24.012 [2024-12-12 16:10:50.108483] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:24.012 16:10:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.012 16:10:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.012 16:10:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:14:24.012 16:10:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.012 16:10:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.012 16:10:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.012 16:10:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:24.012 16:10:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:24.012 16:10:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:24.012 16:10:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 
00:14:24.012 16:10:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:24.012 16:10:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:24.012 16:10:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:24.012 16:10:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:24.012 16:10:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:24.012 16:10:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:24.012 16:10:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:24.012 16:10:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:24.012 16:10:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:24.012 /dev/nbd0 00:14:24.272 16:10:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:24.272 16:10:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:24.272 16:10:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:24.272 16:10:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:14:24.272 16:10:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:24.272 16:10:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:24.272 16:10:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:24.272 16:10:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:14:24.272 16:10:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:24.272 
16:10:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:24.272 16:10:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:24.272 1+0 records in 00:14:24.272 1+0 records out 00:14:24.272 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00045765 s, 9.0 MB/s 00:14:24.272 16:10:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:24.272 16:10:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:14:24.272 16:10:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:24.272 16:10:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:24.272 16:10:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:14:24.272 16:10:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:24.272 16:10:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:24.272 16:10:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:24.272 /dev/nbd1 00:14:24.272 16:10:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:24.272 16:10:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:24.272 16:10:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:24.272 16:10:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:14:24.272 16:10:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:24.272 16:10:50 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:24.272 16:10:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:24.272 16:10:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:14:24.272 16:10:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:24.272 16:10:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:24.272 16:10:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:24.272 1+0 records in 00:14:24.272 1+0 records out 00:14:24.272 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000243464 s, 16.8 MB/s 00:14:24.272 16:10:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:24.272 16:10:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:14:24.272 16:10:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:24.272 16:10:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:24.272 16:10:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:14:24.272 16:10:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:24.272 16:10:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:24.272 16:10:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:24.532 16:10:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:24.532 16:10:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:24.532 16:10:50 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:24.532 16:10:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:24.532 16:10:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:24.532 16:10:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:24.532 16:10:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:24.792 16:10:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:24.792 16:10:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:24.792 16:10:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:24.792 16:10:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:24.792 16:10:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:24.792 16:10:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:24.792 16:10:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:24.792 16:10:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:24.792 16:10:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:24.792 16:10:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:25.052 16:10:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:25.052 16:10:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:25.052 16:10:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:25.052 16:10:51 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:25.052 16:10:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:25.052 16:10:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:25.052 16:10:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:25.052 16:10:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:25.052 16:10:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:25.052 16:10:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:25.052 16:10:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.052 16:10:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.052 16:10:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.052 16:10:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:25.052 16:10:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.052 16:10:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.052 [2024-12-12 16:10:51.235550] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:25.052 [2024-12-12 16:10:51.235624] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:25.052 [2024-12-12 16:10:51.235653] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:14:25.052 [2024-12-12 16:10:51.235664] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:25.052 [2024-12-12 16:10:51.238170] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:25.052 [2024-12-12 16:10:51.238213] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:25.052 [2024-12-12 16:10:51.238317] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:25.052 [2024-12-12 16:10:51.238373] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:25.052 [2024-12-12 16:10:51.238526] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:25.052 [2024-12-12 16:10:51.238624] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:25.052 spare 00:14:25.052 16:10:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.052 16:10:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:25.052 16:10:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.052 16:10:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.052 [2024-12-12 16:10:51.338524] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:14:25.052 [2024-12-12 16:10:51.338552] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:25.052 [2024-12-12 16:10:51.338850] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:14:25.052 [2024-12-12 16:10:51.339087] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:14:25.052 [2024-12-12 16:10:51.339115] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:14:25.052 [2024-12-12 16:10:51.339303] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:25.052 16:10:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.052 16:10:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 3 00:14:25.052 16:10:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:25.052 16:10:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:25.052 16:10:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:25.052 16:10:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:25.052 16:10:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:25.052 16:10:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:25.052 16:10:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:25.052 16:10:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:25.052 16:10:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:25.052 16:10:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.052 16:10:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:25.052 16:10:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.052 16:10:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.052 16:10:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.052 16:10:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:25.052 "name": "raid_bdev1", 00:14:25.052 "uuid": "918d8ca9-1923-42b5-8efb-e43f9d41e172", 00:14:25.052 "strip_size_kb": 0, 00:14:25.052 "state": "online", 00:14:25.052 "raid_level": "raid1", 00:14:25.052 "superblock": true, 00:14:25.052 "num_base_bdevs": 4, 00:14:25.052 "num_base_bdevs_discovered": 3, 00:14:25.052 "num_base_bdevs_operational": 
3, 00:14:25.052 "base_bdevs_list": [ 00:14:25.052 { 00:14:25.052 "name": "spare", 00:14:25.052 "uuid": "9e5cf66b-9d41-5b6a-8f74-e964834525c7", 00:14:25.052 "is_configured": true, 00:14:25.052 "data_offset": 2048, 00:14:25.052 "data_size": 63488 00:14:25.052 }, 00:14:25.052 { 00:14:25.052 "name": null, 00:14:25.052 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:25.052 "is_configured": false, 00:14:25.052 "data_offset": 2048, 00:14:25.052 "data_size": 63488 00:14:25.052 }, 00:14:25.052 { 00:14:25.052 "name": "BaseBdev3", 00:14:25.052 "uuid": "771fd704-1438-5cc0-88a1-fb8d2d06a99e", 00:14:25.052 "is_configured": true, 00:14:25.052 "data_offset": 2048, 00:14:25.052 "data_size": 63488 00:14:25.052 }, 00:14:25.052 { 00:14:25.052 "name": "BaseBdev4", 00:14:25.052 "uuid": "19d89124-f1dc-571a-a867-310fdafaadfd", 00:14:25.052 "is_configured": true, 00:14:25.052 "data_offset": 2048, 00:14:25.052 "data_size": 63488 00:14:25.052 } 00:14:25.052 ] 00:14:25.052 }' 00:14:25.052 16:10:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:25.052 16:10:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.622 16:10:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:25.622 16:10:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:25.622 16:10:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:25.622 16:10:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:25.622 16:10:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:25.622 16:10:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.622 16:10:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:25.622 16:10:51 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.622 16:10:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.622 16:10:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.622 16:10:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:25.622 "name": "raid_bdev1", 00:14:25.622 "uuid": "918d8ca9-1923-42b5-8efb-e43f9d41e172", 00:14:25.622 "strip_size_kb": 0, 00:14:25.622 "state": "online", 00:14:25.622 "raid_level": "raid1", 00:14:25.622 "superblock": true, 00:14:25.622 "num_base_bdevs": 4, 00:14:25.622 "num_base_bdevs_discovered": 3, 00:14:25.622 "num_base_bdevs_operational": 3, 00:14:25.622 "base_bdevs_list": [ 00:14:25.622 { 00:14:25.622 "name": "spare", 00:14:25.622 "uuid": "9e5cf66b-9d41-5b6a-8f74-e964834525c7", 00:14:25.622 "is_configured": true, 00:14:25.622 "data_offset": 2048, 00:14:25.622 "data_size": 63488 00:14:25.622 }, 00:14:25.622 { 00:14:25.622 "name": null, 00:14:25.622 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:25.622 "is_configured": false, 00:14:25.622 "data_offset": 2048, 00:14:25.622 "data_size": 63488 00:14:25.622 }, 00:14:25.622 { 00:14:25.622 "name": "BaseBdev3", 00:14:25.622 "uuid": "771fd704-1438-5cc0-88a1-fb8d2d06a99e", 00:14:25.622 "is_configured": true, 00:14:25.622 "data_offset": 2048, 00:14:25.622 "data_size": 63488 00:14:25.622 }, 00:14:25.622 { 00:14:25.622 "name": "BaseBdev4", 00:14:25.622 "uuid": "19d89124-f1dc-571a-a867-310fdafaadfd", 00:14:25.622 "is_configured": true, 00:14:25.622 "data_offset": 2048, 00:14:25.622 "data_size": 63488 00:14:25.622 } 00:14:25.622 ] 00:14:25.622 }' 00:14:25.622 16:10:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:25.622 16:10:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:25.622 16:10:51 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:25.622 16:10:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:25.622 16:10:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:25.622 16:10:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.622 16:10:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.622 16:10:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.622 16:10:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.622 16:10:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:14:25.622 16:10:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:25.622 16:10:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.622 16:10:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.622 [2024-12-12 16:10:51.926438] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:25.622 16:10:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.622 16:10:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:25.622 16:10:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:25.622 16:10:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:25.622 16:10:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:25.622 16:10:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:25.622 16:10:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- 
# local num_base_bdevs_operational=2 00:14:25.622 16:10:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:25.622 16:10:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:25.622 16:10:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:25.622 16:10:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:25.622 16:10:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.622 16:10:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:25.622 16:10:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.622 16:10:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.622 16:10:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.882 16:10:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:25.882 "name": "raid_bdev1", 00:14:25.882 "uuid": "918d8ca9-1923-42b5-8efb-e43f9d41e172", 00:14:25.882 "strip_size_kb": 0, 00:14:25.882 "state": "online", 00:14:25.882 "raid_level": "raid1", 00:14:25.882 "superblock": true, 00:14:25.882 "num_base_bdevs": 4, 00:14:25.882 "num_base_bdevs_discovered": 2, 00:14:25.882 "num_base_bdevs_operational": 2, 00:14:25.882 "base_bdevs_list": [ 00:14:25.882 { 00:14:25.882 "name": null, 00:14:25.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:25.882 "is_configured": false, 00:14:25.882 "data_offset": 0, 00:14:25.882 "data_size": 63488 00:14:25.882 }, 00:14:25.882 { 00:14:25.882 "name": null, 00:14:25.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:25.882 "is_configured": false, 00:14:25.882 "data_offset": 2048, 00:14:25.882 "data_size": 63488 00:14:25.882 }, 00:14:25.882 { 00:14:25.882 "name": "BaseBdev3", 00:14:25.882 
"uuid": "771fd704-1438-5cc0-88a1-fb8d2d06a99e", 00:14:25.882 "is_configured": true, 00:14:25.882 "data_offset": 2048, 00:14:25.882 "data_size": 63488 00:14:25.882 }, 00:14:25.882 { 00:14:25.882 "name": "BaseBdev4", 00:14:25.882 "uuid": "19d89124-f1dc-571a-a867-310fdafaadfd", 00:14:25.882 "is_configured": true, 00:14:25.882 "data_offset": 2048, 00:14:25.882 "data_size": 63488 00:14:25.882 } 00:14:25.882 ] 00:14:25.882 }' 00:14:25.882 16:10:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:25.882 16:10:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.143 16:10:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:26.143 16:10:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.143 16:10:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.143 [2024-12-12 16:10:52.349771] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:26.143 [2024-12-12 16:10:52.350035] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:26.143 [2024-12-12 16:10:52.350053] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:26.143 [2024-12-12 16:10:52.350098] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:26.143 [2024-12-12 16:10:52.364562] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1d50 00:14:26.143 16:10:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.143 16:10:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:14:26.143 [2024-12-12 16:10:52.366713] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:27.081 16:10:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:27.081 16:10:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:27.081 16:10:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:27.081 16:10:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:27.081 16:10:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:27.081 16:10:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.081 16:10:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:27.081 16:10:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.081 16:10:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.081 16:10:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.081 16:10:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:27.081 "name": "raid_bdev1", 00:14:27.081 "uuid": "918d8ca9-1923-42b5-8efb-e43f9d41e172", 00:14:27.081 "strip_size_kb": 0, 00:14:27.081 "state": "online", 00:14:27.081 "raid_level": "raid1", 
00:14:27.081 "superblock": true, 00:14:27.081 "num_base_bdevs": 4, 00:14:27.081 "num_base_bdevs_discovered": 3, 00:14:27.081 "num_base_bdevs_operational": 3, 00:14:27.081 "process": { 00:14:27.081 "type": "rebuild", 00:14:27.081 "target": "spare", 00:14:27.081 "progress": { 00:14:27.081 "blocks": 20480, 00:14:27.081 "percent": 32 00:14:27.081 } 00:14:27.081 }, 00:14:27.081 "base_bdevs_list": [ 00:14:27.081 { 00:14:27.081 "name": "spare", 00:14:27.081 "uuid": "9e5cf66b-9d41-5b6a-8f74-e964834525c7", 00:14:27.081 "is_configured": true, 00:14:27.081 "data_offset": 2048, 00:14:27.081 "data_size": 63488 00:14:27.081 }, 00:14:27.081 { 00:14:27.081 "name": null, 00:14:27.081 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.081 "is_configured": false, 00:14:27.081 "data_offset": 2048, 00:14:27.081 "data_size": 63488 00:14:27.081 }, 00:14:27.081 { 00:14:27.081 "name": "BaseBdev3", 00:14:27.081 "uuid": "771fd704-1438-5cc0-88a1-fb8d2d06a99e", 00:14:27.081 "is_configured": true, 00:14:27.081 "data_offset": 2048, 00:14:27.081 "data_size": 63488 00:14:27.081 }, 00:14:27.081 { 00:14:27.081 "name": "BaseBdev4", 00:14:27.081 "uuid": "19d89124-f1dc-571a-a867-310fdafaadfd", 00:14:27.081 "is_configured": true, 00:14:27.081 "data_offset": 2048, 00:14:27.081 "data_size": 63488 00:14:27.081 } 00:14:27.081 ] 00:14:27.081 }' 00:14:27.081 16:10:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:27.341 16:10:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:27.341 16:10:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:27.342 16:10:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:27.342 16:10:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:27.342 16:10:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:27.342 16:10:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.342 [2024-12-12 16:10:53.534082] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:27.342 [2024-12-12 16:10:53.575560] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:27.342 [2024-12-12 16:10:53.575639] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:27.342 [2024-12-12 16:10:53.575662] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:27.342 [2024-12-12 16:10:53.575671] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:27.342 16:10:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.342 16:10:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:27.342 16:10:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:27.342 16:10:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:27.342 16:10:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:27.342 16:10:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:27.342 16:10:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:27.342 16:10:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:27.342 16:10:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:27.342 16:10:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:27.342 16:10:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:27.342 16:10:53 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.342 16:10:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:27.342 16:10:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.342 16:10:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.342 16:10:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.342 16:10:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:27.342 "name": "raid_bdev1", 00:14:27.342 "uuid": "918d8ca9-1923-42b5-8efb-e43f9d41e172", 00:14:27.342 "strip_size_kb": 0, 00:14:27.342 "state": "online", 00:14:27.342 "raid_level": "raid1", 00:14:27.342 "superblock": true, 00:14:27.342 "num_base_bdevs": 4, 00:14:27.342 "num_base_bdevs_discovered": 2, 00:14:27.342 "num_base_bdevs_operational": 2, 00:14:27.342 "base_bdevs_list": [ 00:14:27.342 { 00:14:27.342 "name": null, 00:14:27.342 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.342 "is_configured": false, 00:14:27.342 "data_offset": 0, 00:14:27.342 "data_size": 63488 00:14:27.342 }, 00:14:27.342 { 00:14:27.342 "name": null, 00:14:27.342 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.342 "is_configured": false, 00:14:27.342 "data_offset": 2048, 00:14:27.342 "data_size": 63488 00:14:27.342 }, 00:14:27.342 { 00:14:27.342 "name": "BaseBdev3", 00:14:27.342 "uuid": "771fd704-1438-5cc0-88a1-fb8d2d06a99e", 00:14:27.342 "is_configured": true, 00:14:27.342 "data_offset": 2048, 00:14:27.342 "data_size": 63488 00:14:27.342 }, 00:14:27.342 { 00:14:27.342 "name": "BaseBdev4", 00:14:27.342 "uuid": "19d89124-f1dc-571a-a867-310fdafaadfd", 00:14:27.342 "is_configured": true, 00:14:27.342 "data_offset": 2048, 00:14:27.342 "data_size": 63488 00:14:27.342 } 00:14:27.342 ] 00:14:27.342 }' 00:14:27.342 16:10:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:14:27.342 16:10:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.912 16:10:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:27.912 16:10:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.912 16:10:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.912 [2024-12-12 16:10:54.030169] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:27.912 [2024-12-12 16:10:54.030323] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:27.912 [2024-12-12 16:10:54.030377] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:14:27.912 [2024-12-12 16:10:54.030417] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:27.912 [2024-12-12 16:10:54.031033] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:27.912 [2024-12-12 16:10:54.031114] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:27.912 [2024-12-12 16:10:54.031261] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:27.912 [2024-12-12 16:10:54.031322] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:27.912 [2024-12-12 16:10:54.031393] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:27.912 [2024-12-12 16:10:54.031480] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:27.912 [2024-12-12 16:10:54.045429] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1e20 00:14:27.912 spare 00:14:27.912 16:10:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.912 16:10:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:27.912 [2024-12-12 16:10:54.047619] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:28.853 16:10:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:28.853 16:10:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:28.853 16:10:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:28.853 16:10:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:28.853 16:10:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:28.853 16:10:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.853 16:10:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:28.853 16:10:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.853 16:10:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.853 16:10:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.853 16:10:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:28.853 "name": "raid_bdev1", 00:14:28.853 "uuid": "918d8ca9-1923-42b5-8efb-e43f9d41e172", 00:14:28.853 "strip_size_kb": 0, 00:14:28.853 "state": "online", 00:14:28.853 
"raid_level": "raid1", 00:14:28.853 "superblock": true, 00:14:28.853 "num_base_bdevs": 4, 00:14:28.853 "num_base_bdevs_discovered": 3, 00:14:28.853 "num_base_bdevs_operational": 3, 00:14:28.853 "process": { 00:14:28.853 "type": "rebuild", 00:14:28.853 "target": "spare", 00:14:28.853 "progress": { 00:14:28.853 "blocks": 20480, 00:14:28.853 "percent": 32 00:14:28.853 } 00:14:28.853 }, 00:14:28.853 "base_bdevs_list": [ 00:14:28.853 { 00:14:28.853 "name": "spare", 00:14:28.853 "uuid": "9e5cf66b-9d41-5b6a-8f74-e964834525c7", 00:14:28.853 "is_configured": true, 00:14:28.853 "data_offset": 2048, 00:14:28.853 "data_size": 63488 00:14:28.853 }, 00:14:28.853 { 00:14:28.853 "name": null, 00:14:28.853 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:28.853 "is_configured": false, 00:14:28.853 "data_offset": 2048, 00:14:28.853 "data_size": 63488 00:14:28.853 }, 00:14:28.853 { 00:14:28.853 "name": "BaseBdev3", 00:14:28.853 "uuid": "771fd704-1438-5cc0-88a1-fb8d2d06a99e", 00:14:28.853 "is_configured": true, 00:14:28.853 "data_offset": 2048, 00:14:28.853 "data_size": 63488 00:14:28.853 }, 00:14:28.853 { 00:14:28.853 "name": "BaseBdev4", 00:14:28.853 "uuid": "19d89124-f1dc-571a-a867-310fdafaadfd", 00:14:28.853 "is_configured": true, 00:14:28.853 "data_offset": 2048, 00:14:28.853 "data_size": 63488 00:14:28.853 } 00:14:28.853 ] 00:14:28.853 }' 00:14:28.853 16:10:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:28.853 16:10:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:28.853 16:10:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:28.853 16:10:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:28.853 16:10:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:28.853 16:10:55 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.853 16:10:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.853 [2024-12-12 16:10:55.179100] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:29.113 [2024-12-12 16:10:55.256593] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:29.113 [2024-12-12 16:10:55.256732] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:29.113 [2024-12-12 16:10:55.256780] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:29.113 [2024-12-12 16:10:55.256819] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:29.113 16:10:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.113 16:10:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:29.113 16:10:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:29.113 16:10:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:29.113 16:10:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:29.113 16:10:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:29.113 16:10:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:29.113 16:10:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:29.113 16:10:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:29.113 16:10:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:29.113 16:10:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:29.113 
16:10:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.113 16:10:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:29.113 16:10:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.113 16:10:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.113 16:10:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.113 16:10:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:29.113 "name": "raid_bdev1", 00:14:29.113 "uuid": "918d8ca9-1923-42b5-8efb-e43f9d41e172", 00:14:29.113 "strip_size_kb": 0, 00:14:29.113 "state": "online", 00:14:29.113 "raid_level": "raid1", 00:14:29.113 "superblock": true, 00:14:29.113 "num_base_bdevs": 4, 00:14:29.113 "num_base_bdevs_discovered": 2, 00:14:29.113 "num_base_bdevs_operational": 2, 00:14:29.113 "base_bdevs_list": [ 00:14:29.113 { 00:14:29.113 "name": null, 00:14:29.113 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:29.113 "is_configured": false, 00:14:29.113 "data_offset": 0, 00:14:29.113 "data_size": 63488 00:14:29.113 }, 00:14:29.113 { 00:14:29.113 "name": null, 00:14:29.113 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:29.113 "is_configured": false, 00:14:29.113 "data_offset": 2048, 00:14:29.113 "data_size": 63488 00:14:29.113 }, 00:14:29.113 { 00:14:29.113 "name": "BaseBdev3", 00:14:29.113 "uuid": "771fd704-1438-5cc0-88a1-fb8d2d06a99e", 00:14:29.113 "is_configured": true, 00:14:29.113 "data_offset": 2048, 00:14:29.113 "data_size": 63488 00:14:29.113 }, 00:14:29.113 { 00:14:29.113 "name": "BaseBdev4", 00:14:29.113 "uuid": "19d89124-f1dc-571a-a867-310fdafaadfd", 00:14:29.113 "is_configured": true, 00:14:29.113 "data_offset": 2048, 00:14:29.113 "data_size": 63488 00:14:29.113 } 00:14:29.113 ] 00:14:29.113 }' 00:14:29.113 16:10:55 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:29.113 16:10:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.373 16:10:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:29.373 16:10:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:29.373 16:10:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:29.373 16:10:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:29.373 16:10:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:29.373 16:10:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.373 16:10:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.373 16:10:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.373 16:10:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:29.373 16:10:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.633 16:10:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:29.633 "name": "raid_bdev1", 00:14:29.633 "uuid": "918d8ca9-1923-42b5-8efb-e43f9d41e172", 00:14:29.633 "strip_size_kb": 0, 00:14:29.633 "state": "online", 00:14:29.633 "raid_level": "raid1", 00:14:29.633 "superblock": true, 00:14:29.633 "num_base_bdevs": 4, 00:14:29.633 "num_base_bdevs_discovered": 2, 00:14:29.633 "num_base_bdevs_operational": 2, 00:14:29.633 "base_bdevs_list": [ 00:14:29.633 { 00:14:29.633 "name": null, 00:14:29.633 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:29.633 "is_configured": false, 00:14:29.633 "data_offset": 0, 00:14:29.633 "data_size": 63488 00:14:29.634 }, 00:14:29.634 
{ 00:14:29.634 "name": null, 00:14:29.634 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:29.634 "is_configured": false, 00:14:29.634 "data_offset": 2048, 00:14:29.634 "data_size": 63488 00:14:29.634 }, 00:14:29.634 { 00:14:29.634 "name": "BaseBdev3", 00:14:29.634 "uuid": "771fd704-1438-5cc0-88a1-fb8d2d06a99e", 00:14:29.634 "is_configured": true, 00:14:29.634 "data_offset": 2048, 00:14:29.634 "data_size": 63488 00:14:29.634 }, 00:14:29.634 { 00:14:29.634 "name": "BaseBdev4", 00:14:29.634 "uuid": "19d89124-f1dc-571a-a867-310fdafaadfd", 00:14:29.634 "is_configured": true, 00:14:29.634 "data_offset": 2048, 00:14:29.634 "data_size": 63488 00:14:29.634 } 00:14:29.634 ] 00:14:29.634 }' 00:14:29.634 16:10:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:29.634 16:10:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:29.634 16:10:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:29.634 16:10:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:29.634 16:10:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:14:29.634 16:10:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.634 16:10:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.634 16:10:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.634 16:10:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:29.634 16:10:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.634 16:10:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.634 [2024-12-12 16:10:55.843139] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:29.634 [2024-12-12 16:10:55.843262] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:29.634 [2024-12-12 16:10:55.843290] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:14:29.634 [2024-12-12 16:10:55.843304] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:29.634 [2024-12-12 16:10:55.843873] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:29.634 [2024-12-12 16:10:55.843901] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:29.634 [2024-12-12 16:10:55.844015] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:29.634 [2024-12-12 16:10:55.844036] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:29.634 [2024-12-12 16:10:55.844046] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:29.634 [2024-12-12 16:10:55.844077] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:29.634 BaseBdev1 00:14:29.634 16:10:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.634 16:10:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:30.573 16:10:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:30.573 16:10:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:30.573 16:10:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:30.573 16:10:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:30.573 16:10:56 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:30.573 16:10:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:30.573 16:10:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:30.573 16:10:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:30.574 16:10:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:30.574 16:10:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:30.574 16:10:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.574 16:10:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:30.574 16:10:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.574 16:10:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.574 16:10:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.574 16:10:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:30.574 "name": "raid_bdev1", 00:14:30.574 "uuid": "918d8ca9-1923-42b5-8efb-e43f9d41e172", 00:14:30.574 "strip_size_kb": 0, 00:14:30.574 "state": "online", 00:14:30.574 "raid_level": "raid1", 00:14:30.574 "superblock": true, 00:14:30.574 "num_base_bdevs": 4, 00:14:30.574 "num_base_bdevs_discovered": 2, 00:14:30.574 "num_base_bdevs_operational": 2, 00:14:30.574 "base_bdevs_list": [ 00:14:30.574 { 00:14:30.574 "name": null, 00:14:30.574 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.574 "is_configured": false, 00:14:30.574 "data_offset": 0, 00:14:30.574 "data_size": 63488 00:14:30.574 }, 00:14:30.574 { 00:14:30.574 "name": null, 00:14:30.574 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.574 
"is_configured": false, 00:14:30.574 "data_offset": 2048, 00:14:30.574 "data_size": 63488 00:14:30.574 }, 00:14:30.574 { 00:14:30.574 "name": "BaseBdev3", 00:14:30.574 "uuid": "771fd704-1438-5cc0-88a1-fb8d2d06a99e", 00:14:30.574 "is_configured": true, 00:14:30.574 "data_offset": 2048, 00:14:30.574 "data_size": 63488 00:14:30.574 }, 00:14:30.574 { 00:14:30.574 "name": "BaseBdev4", 00:14:30.574 "uuid": "19d89124-f1dc-571a-a867-310fdafaadfd", 00:14:30.574 "is_configured": true, 00:14:30.574 "data_offset": 2048, 00:14:30.574 "data_size": 63488 00:14:30.574 } 00:14:30.574 ] 00:14:30.574 }' 00:14:30.574 16:10:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:30.574 16:10:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.143 16:10:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:31.143 16:10:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:31.143 16:10:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:31.143 16:10:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:31.143 16:10:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:31.143 16:10:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.143 16:10:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:31.143 16:10:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.143 16:10:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.143 16:10:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.143 16:10:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:14:31.143 "name": "raid_bdev1", 00:14:31.143 "uuid": "918d8ca9-1923-42b5-8efb-e43f9d41e172", 00:14:31.143 "strip_size_kb": 0, 00:14:31.143 "state": "online", 00:14:31.143 "raid_level": "raid1", 00:14:31.143 "superblock": true, 00:14:31.143 "num_base_bdevs": 4, 00:14:31.143 "num_base_bdevs_discovered": 2, 00:14:31.143 "num_base_bdevs_operational": 2, 00:14:31.143 "base_bdevs_list": [ 00:14:31.143 { 00:14:31.143 "name": null, 00:14:31.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.143 "is_configured": false, 00:14:31.143 "data_offset": 0, 00:14:31.143 "data_size": 63488 00:14:31.143 }, 00:14:31.143 { 00:14:31.143 "name": null, 00:14:31.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.143 "is_configured": false, 00:14:31.143 "data_offset": 2048, 00:14:31.143 "data_size": 63488 00:14:31.143 }, 00:14:31.143 { 00:14:31.143 "name": "BaseBdev3", 00:14:31.143 "uuid": "771fd704-1438-5cc0-88a1-fb8d2d06a99e", 00:14:31.143 "is_configured": true, 00:14:31.143 "data_offset": 2048, 00:14:31.143 "data_size": 63488 00:14:31.143 }, 00:14:31.143 { 00:14:31.143 "name": "BaseBdev4", 00:14:31.143 "uuid": "19d89124-f1dc-571a-a867-310fdafaadfd", 00:14:31.143 "is_configured": true, 00:14:31.143 "data_offset": 2048, 00:14:31.143 "data_size": 63488 00:14:31.143 } 00:14:31.143 ] 00:14:31.143 }' 00:14:31.143 16:10:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:31.143 16:10:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:31.143 16:10:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:31.143 16:10:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:31.143 16:10:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:31.143 16:10:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local 
es=0 00:14:31.143 16:10:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:31.143 16:10:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:31.143 16:10:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:31.143 16:10:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:31.143 16:10:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:31.144 16:10:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:31.144 16:10:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.144 16:10:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.144 [2024-12-12 16:10:57.468528] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:31.144 [2024-12-12 16:10:57.468871] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:31.144 [2024-12-12 16:10:57.468960] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:31.144 request: 00:14:31.144 { 00:14:31.144 "base_bdev": "BaseBdev1", 00:14:31.144 "raid_bdev": "raid_bdev1", 00:14:31.144 "method": "bdev_raid_add_base_bdev", 00:14:31.144 "req_id": 1 00:14:31.144 } 00:14:31.144 Got JSON-RPC error response 00:14:31.144 response: 00:14:31.144 { 00:14:31.144 "code": -22, 00:14:31.144 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:14:31.144 } 00:14:31.144 16:10:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:31.144 16:10:57 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@655 -- # es=1 00:14:31.144 16:10:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:31.144 16:10:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:31.144 16:10:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:31.144 16:10:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:32.525 16:10:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:32.525 16:10:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:32.525 16:10:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:32.525 16:10:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:32.525 16:10:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:32.525 16:10:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:32.525 16:10:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:32.525 16:10:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:32.525 16:10:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:32.525 16:10:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:32.525 16:10:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.525 16:10:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:32.525 16:10:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.525 16:10:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:14:32.525 16:10:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.525 16:10:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:32.525 "name": "raid_bdev1", 00:14:32.525 "uuid": "918d8ca9-1923-42b5-8efb-e43f9d41e172", 00:14:32.525 "strip_size_kb": 0, 00:14:32.525 "state": "online", 00:14:32.525 "raid_level": "raid1", 00:14:32.525 "superblock": true, 00:14:32.525 "num_base_bdevs": 4, 00:14:32.525 "num_base_bdevs_discovered": 2, 00:14:32.525 "num_base_bdevs_operational": 2, 00:14:32.525 "base_bdevs_list": [ 00:14:32.525 { 00:14:32.525 "name": null, 00:14:32.525 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.525 "is_configured": false, 00:14:32.525 "data_offset": 0, 00:14:32.525 "data_size": 63488 00:14:32.525 }, 00:14:32.525 { 00:14:32.525 "name": null, 00:14:32.525 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.525 "is_configured": false, 00:14:32.525 "data_offset": 2048, 00:14:32.525 "data_size": 63488 00:14:32.525 }, 00:14:32.525 { 00:14:32.525 "name": "BaseBdev3", 00:14:32.525 "uuid": "771fd704-1438-5cc0-88a1-fb8d2d06a99e", 00:14:32.525 "is_configured": true, 00:14:32.525 "data_offset": 2048, 00:14:32.525 "data_size": 63488 00:14:32.525 }, 00:14:32.525 { 00:14:32.525 "name": "BaseBdev4", 00:14:32.525 "uuid": "19d89124-f1dc-571a-a867-310fdafaadfd", 00:14:32.525 "is_configured": true, 00:14:32.525 "data_offset": 2048, 00:14:32.525 "data_size": 63488 00:14:32.525 } 00:14:32.525 ] 00:14:32.525 }' 00:14:32.525 16:10:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:32.525 16:10:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.785 16:10:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:32.785 16:10:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:32.785 16:10:58 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:32.785 16:10:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:32.785 16:10:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:32.785 16:10:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:32.785 16:10:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.785 16:10:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.785 16:10:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.785 16:10:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.785 16:10:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:32.785 "name": "raid_bdev1", 00:14:32.785 "uuid": "918d8ca9-1923-42b5-8efb-e43f9d41e172", 00:14:32.785 "strip_size_kb": 0, 00:14:32.785 "state": "online", 00:14:32.785 "raid_level": "raid1", 00:14:32.785 "superblock": true, 00:14:32.785 "num_base_bdevs": 4, 00:14:32.785 "num_base_bdevs_discovered": 2, 00:14:32.785 "num_base_bdevs_operational": 2, 00:14:32.785 "base_bdevs_list": [ 00:14:32.785 { 00:14:32.785 "name": null, 00:14:32.785 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.785 "is_configured": false, 00:14:32.785 "data_offset": 0, 00:14:32.785 "data_size": 63488 00:14:32.785 }, 00:14:32.785 { 00:14:32.785 "name": null, 00:14:32.785 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.785 "is_configured": false, 00:14:32.785 "data_offset": 2048, 00:14:32.785 "data_size": 63488 00:14:32.785 }, 00:14:32.785 { 00:14:32.785 "name": "BaseBdev3", 00:14:32.785 "uuid": "771fd704-1438-5cc0-88a1-fb8d2d06a99e", 00:14:32.785 "is_configured": true, 00:14:32.785 "data_offset": 2048, 00:14:32.785 "data_size": 63488 00:14:32.785 }, 
00:14:32.785 { 00:14:32.785 "name": "BaseBdev4", 00:14:32.785 "uuid": "19d89124-f1dc-571a-a867-310fdafaadfd", 00:14:32.785 "is_configured": true, 00:14:32.785 "data_offset": 2048, 00:14:32.785 "data_size": 63488 00:14:32.785 } 00:14:32.785 ] 00:14:32.785 }' 00:14:32.785 16:10:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:32.785 16:10:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:32.785 16:10:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:32.785 16:10:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:32.785 16:10:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 80079 00:14:32.785 16:10:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 80079 ']' 00:14:32.786 16:10:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 80079 00:14:32.786 16:10:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:14:32.786 16:10:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:32.786 16:10:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80079 00:14:32.786 killing process with pid 80079 00:14:32.786 Received shutdown signal, test time was about 60.000000 seconds 00:14:32.786 00:14:32.786 Latency(us) 00:14:32.786 [2024-12-12T16:10:59.138Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:32.786 [2024-12-12T16:10:59.138Z] =================================================================================================================== 00:14:32.786 [2024-12-12T16:10:59.138Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:32.786 16:10:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:14:32.786 16:10:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:14:32.786 16:10:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80079'
00:14:32.786 16:10:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 80079
00:14:32.786 [2024-12-12 16:10:59.074910] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:14:32.786 [2024-12-12 16:10:59.075055] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:14:32.786 16:10:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 80079
00:14:32.786 [2024-12-12 16:10:59.075139] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:14:32.786 [2024-12-12 16:10:59.075150] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline
00:14:33.355 [2024-12-12 16:10:59.597578] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:14:34.738 16:11:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0
00:14:34.738
00:14:34.738 real 0m25.388s
00:14:34.738 user 0m29.874s
00:14:34.738 sys 0m3.961s
00:14:34.738 16:11:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:34.738 16:11:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:34.738 ************************************
00:14:34.738 END TEST raid_rebuild_test_sb
00:14:34.738 ************************************
00:14:34.738 16:11:00 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true
00:14:34.738 16:11:00 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']'
00:14:34.738 16:11:00 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:34.738 16:11:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:14:34.738 ************************************
00:14:34.738 START TEST raid_rebuild_test_io
************************************
00:14:34.738 16:11:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false true true
00:14:34.738 16:11:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1
00:14:34.738 16:11:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4
00:14:34.738 16:11:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false
00:14:34.738 16:11:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true
00:14:34.738 16:11:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true
00:14:34.738 16:11:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 ))
00:14:34.738 16:11:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:14:34.738 16:11:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1
00:14:34.738 16:11:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:14:34.738 16:11:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:14:34.738 16:11:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2
00:14:34.738 16:11:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:14:34.738 16:11:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:14:34.738 16:11:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3
00:14:34.738 16:11:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:14:34.738 16:11:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:14:34.738 16:11:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4
00:14:34.738 16:11:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:14:34.738 16:11:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:14:34.738 16:11:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:14:34.738 16:11:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs
00:14:34.738 16:11:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1
00:14:34.738 16:11:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size
00:14:34.738 16:11:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg
00:14:34.738 16:11:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size
00:14:34.738 16:11:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset
00:14:34.738 16:11:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']'
00:14:34.738 16:11:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0
00:14:34.738 16:11:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']'
00:14:34.738 16:11:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=80835
00:14:34.738 16:11:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 80835
00:14:34.738 16:11:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid
00:14:34.738 16:11:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 80835 ']'
00:14:34.738 16:11:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:14:34.738 16:11:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:14:34.738 16:11:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:14:34.738 16:11:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable
00:14:34.738 16:11:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:34.738 I/O size of 3145728 is greater than zero copy threshold (65536).
00:14:34.738 Zero copy mechanism will not be used.
00:14:34.738 [2024-12-12 16:11:00.975096] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:14:34.738 [2024-12-12 16:11:00.975204] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80835 ]
00:14:34.998 [2024-12-12 16:11:01.127207] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:34.998 [2024-12-12 16:11:01.255202] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:14:35.265 [2024-12-12 16:11:01.489887] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:14:35.265 [2024-12-12 16:11:01.489991] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:14:35.533 16:11:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:14:35.533 16:11:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0
00:14:35.533 16:11:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:14:35.533 16:11:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:14:35.533 16:11:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:35.533 16:11:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:35.533 BaseBdev1_malloc
00:14:35.533 16:11:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:35.533 16:11:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:14:35.533 16:11:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:35.533 16:11:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:35.533 [2024-12-12 16:11:01.861990] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
[2024-12-12 16:11:01.862076] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
[2024-12-12 16:11:01.862104] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
[2024-12-12 16:11:01.862119] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
[2024-12-12 16:11:01.864542] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
[2024-12-12 16:11:01.864590] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
BaseBdev1
00:14:35.533 16:11:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:35.533 16:11:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:14:35.533 16:11:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:14:35.533 16:11:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:35.533 16:11:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:35.793 BaseBdev2_malloc
00:14:35.793 16:11:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:35.793 16:11:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
00:14:35.793 16:11:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:35.793 16:11:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:35.793 [2024-12-12 16:11:01.923535] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc
[2024-12-12 16:11:01.923631] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
[2024-12-12 16:11:01.923657] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
[2024-12-12 16:11:01.923673] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
[2024-12-12 16:11:01.926033] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
[2024-12-12 16:11:01.926074] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
BaseBdev2
00:14:35.793 16:11:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:35.793 16:11:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:14:35.793 16:11:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc
00:14:35.793 16:11:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:35.793 16:11:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:35.793 BaseBdev3_malloc
00:14:35.793 16:11:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:35.793 16:11:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3
00:14:35.794 16:11:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:35.794 16:11:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:35.794 [2024-12-12 16:11:02.000668] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc
[2024-12-12 16:11:02.000743] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
[2024-12-12 16:11:02.000783] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
[2024-12-12 16:11:02.000798] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
[2024-12-12 16:11:02.003365] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
[2024-12-12 16:11:02.003412] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
BaseBdev3
00:14:35.794 16:11:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:35.794 16:11:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:14:35.794 16:11:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc
00:14:35.794 16:11:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:35.794 16:11:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:35.794 BaseBdev4_malloc
00:14:35.794 16:11:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:35.794 16:11:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4
00:14:35.794 16:11:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:35.794 16:11:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:35.794 [2024-12-12 16:11:02.062448] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc
[2024-12-12 16:11:02.062527] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
[2024-12-12 16:11:02.062553] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680
[2024-12-12 16:11:02.062567] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
[2024-12-12 16:11:02.064950] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
[2024-12-12 16:11:02.064992] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4
BaseBdev4
00:14:35.794 16:11:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:35.794 16:11:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc
00:14:35.794 16:11:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:35.794 16:11:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:35.794 spare_malloc
00:14:35.794 16:11:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:35.794 16:11:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
00:14:35.794 16:11:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:35.794 16:11:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:35.794 spare_delay
00:14:35.794 16:11:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:35.794 16:11:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:14:35.794 16:11:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:35.794 16:11:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:35.794 [2024-12-12 16:11:02.134713] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay
[2024-12-12 16:11:02.134744] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
[2024-12-12 16:11:02.134793] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880
[2024-12-12 16:11:02.134810] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
[2024-12-12 16:11:02.137156] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
[2024-12-12 16:11:02.137199] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
spare
00:14:35.794 16:11:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:35.794 16:11:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1
00:14:35.794 16:11:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:35.794 16:11:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:36.054 [2024-12-12 16:11:02.146742] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:14:36.054 [2024-12-12 16:11:02.148755] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:14:36.054 [2024-12-12 16:11:02.148832] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:14:36.054 [2024-12-12 16:11:02.148913] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:14:36.054 [2024-12-12 16:11:02.149025] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:14:36.054 [2024-12-12 16:11:02.149050] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512
00:14:36.054 [2024-12-12 16:11:02.149300] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0
00:14:36.054 [2024-12-12 16:11:02.149487] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:14:36.054 [2024-12-12 16:11:02.149507] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780
00:14:36.054 [2024-12-12 16:11:02.149670] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:14:36.054 16:11:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:36.054 16:11:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4
00:14:36.054 16:11:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:14:36.054 16:11:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:14:36.054 16:11:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:14:36.054 16:11:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:14:36.054 16:11:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:14:36.054 16:11:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:36.054 16:11:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:36.054 16:11:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:36.054 16:11:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:36.054 16:11:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:36.054 16:11:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:36.054 16:11:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:36.054 16:11:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:36.054 16:11:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:36.054 16:11:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:36.054 "name": "raid_bdev1",
00:14:36.054 "uuid": "0194df06-656e-487a-ba0c-57a0e81f1283",
00:14:36.054 "strip_size_kb": 0,
00:14:36.054 "state": "online",
00:14:36.054 "raid_level": "raid1",
00:14:36.054 "superblock": false,
00:14:36.054 "num_base_bdevs": 4,
00:14:36.054 "num_base_bdevs_discovered": 4,
00:14:36.054 "num_base_bdevs_operational": 4,
00:14:36.054 "base_bdevs_list": [
00:14:36.054 {
00:14:36.054 "name": "BaseBdev1",
00:14:36.054 "uuid": "7442185a-7c87-57d2-91a7-b8a3e31ddd7d",
00:14:36.054 "is_configured": true,
00:14:36.054 "data_offset": 0,
00:14:36.054 "data_size": 65536
00:14:36.054 },
00:14:36.054 {
00:14:36.054 "name": "BaseBdev2",
00:14:36.054 "uuid": "ce6a046a-5f9c-5799-9bd3-fe4bde51badb",
00:14:36.054 "is_configured": true,
00:14:36.054 "data_offset": 0,
00:14:36.054 "data_size": 65536
00:14:36.054 },
00:14:36.054 {
00:14:36.054 "name": "BaseBdev3",
00:14:36.054 "uuid": "a4b8c389-484e-543d-bff7-b56ac39feb72",
00:14:36.054 "is_configured": true,
00:14:36.054 "data_offset": 0,
00:14:36.054 "data_size": 65536
00:14:36.054 },
00:14:36.054 {
00:14:36.054 "name": "BaseBdev4",
00:14:36.054 "uuid": "6bc014b6-95e2-5040-b758-419f9c8c5425",
00:14:36.054 "is_configured": true,
00:14:36.054 "data_offset": 0,
00:14:36.054 "data_size": 65536
00:14:36.054 }
00:14:36.054 ]
00:14:36.054 }'
00:14:36.054 16:11:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:36.054 16:11:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:36.314 16:11:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks'
00:14:36.314 16:11:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:14:36.314 16:11:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:36.314 16:11:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:36.314 [2024-12-12 16:11:02.550336] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:14:36.314 16:11:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:36.314 16:11:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536
00:14:36.314 16:11:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset'
00:14:36.314 16:11:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:36.314 16:11:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:36.314 16:11:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:36.314 16:11:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:36.314 16:11:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0
00:14:36.314 16:11:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']'
00:14:36.314 16:11:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1
00:14:36.314 16:11:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:14:36.314 16:11:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:36.314 16:11:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:36.314 [2024-12-12 16:11:02.617864] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:14:36.314 16:11:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:36.314 16:11:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:14:36.314 16:11:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:14:36.314 16:11:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:14:36.314 16:11:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:14:36.314 16:11:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:14:36.314 16:11:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:14:36.314 16:11:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:36.314 16:11:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:36.314 16:11:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:36.314 16:11:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:36.314 16:11:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:36.314 16:11:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:36.314 16:11:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:36.314 16:11:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:36.314 16:11:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:36.314 16:11:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:36.314 "name": "raid_bdev1",
00:14:36.314 "uuid": "0194df06-656e-487a-ba0c-57a0e81f1283",
00:14:36.314 "strip_size_kb": 0,
00:14:36.314 "state": "online",
00:14:36.314 "raid_level": "raid1",
00:14:36.314 "superblock": false,
00:14:36.314 "num_base_bdevs": 4,
00:14:36.314 "num_base_bdevs_discovered": 3,
00:14:36.314 "num_base_bdevs_operational": 3,
00:14:36.314 "base_bdevs_list": [
00:14:36.314 {
00:14:36.314 "name": null,
00:14:36.314 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:36.314 "is_configured": false,
00:14:36.314 "data_offset": 0,
00:14:36.314 "data_size": 65536
00:14:36.314 },
00:14:36.314 {
00:14:36.314 "name": "BaseBdev2",
00:14:36.314 "uuid": "ce6a046a-5f9c-5799-9bd3-fe4bde51badb",
00:14:36.314 "is_configured": true,
00:14:36.314 "data_offset": 0,
00:14:36.314 "data_size": 65536
00:14:36.314 },
00:14:36.314 {
00:14:36.314 "name": "BaseBdev3",
00:14:36.314 "uuid": "a4b8c389-484e-543d-bff7-b56ac39feb72",
00:14:36.314 "is_configured": true,
00:14:36.314 "data_offset": 0,
00:14:36.314 "data_size": 65536
00:14:36.314 },
00:14:36.315 {
00:14:36.315 "name": "BaseBdev4",
00:14:36.315 "uuid": "6bc014b6-95e2-5040-b758-419f9c8c5425",
00:14:36.315 "is_configured": true,
00:14:36.315 "data_offset": 0,
00:14:36.315 "data_size": 65536
00:14:36.315 }
00:14:36.315 ]
00:14:36.315 }'
00:14:36.315 16:11:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:36.315 16:11:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:36.574 [2024-12-12 16:11:02.715190] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220
00:14:36.574 I/O size of 3145728 is greater than zero copy threshold (65536).
00:14:36.574 Zero copy mechanism will not be used.
00:14:36.574 Running I/O for 60 seconds...
00:14:36.834 16:11:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:14:36.834 16:11:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:36.834 16:11:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:36.834 [2024-12-12 16:11:03.023510] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:14:36.834 16:11:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:36.834 16:11:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1
00:14:36.834 [2024-12-12 16:11:03.093939] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0
00:14:36.834 [2024-12-12 16:11:03.096220] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:14:37.094 [2024-12-12 16:11:03.386220] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:14:37.094 [2024-12-12 16:11:03.386759] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:14:37.665 159.00 IOPS, 477.00 MiB/s [2024-12-12T16:11:04.017Z] [2024-12-12 16:11:03.723703] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288
00:14:37.665 [2024-12-12 16:11:03.724246] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288
00:14:37.665 [2024-12-12 16:11:03.866068] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288
00:14:37.665 [2024-12-12 16:11:03.866890] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288
00:14:37.925 16:11:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:14:37.925 16:11:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:14:37.925 16:11:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:14:37.925 16:11:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:14:37.925 16:11:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:14:37.925 16:11:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:37.925 16:11:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:37.925 16:11:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:37.925 16:11:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:37.925 16:11:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:37.925 16:11:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:14:37.925 "name": "raid_bdev1",
00:14:37.925 "uuid": "0194df06-656e-487a-ba0c-57a0e81f1283",
00:14:37.925 "strip_size_kb": 0,
00:14:37.925 "state": "online",
00:14:37.925 "raid_level": "raid1",
00:14:37.925 "superblock": false,
00:14:37.925 "num_base_bdevs": 4,
00:14:37.925 "num_base_bdevs_discovered": 4,
00:14:37.925 "num_base_bdevs_operational": 4,
00:14:37.925 "process": {
00:14:37.925 "type": "rebuild",
00:14:37.925 "target": "spare",
00:14:37.925 "progress": {
00:14:37.925 "blocks": 10240,
00:14:37.925 "percent": 15
00:14:37.925 }
00:14:37.925 },
00:14:37.925 "base_bdevs_list": [
00:14:37.925 {
00:14:37.925 "name": "spare",
00:14:37.925 "uuid": "2e37a8f2-a2a4-5890-bf60-a000cc2a5f91",
00:14:37.925 "is_configured": true,
00:14:37.925 "data_offset": 0,
00:14:37.925 "data_size": 65536
00:14:37.925 },
00:14:37.925 {
00:14:37.925 "name": "BaseBdev2",
00:14:37.925 "uuid": "ce6a046a-5f9c-5799-9bd3-fe4bde51badb",
00:14:37.925 "is_configured": true,
00:14:37.925 "data_offset": 0,
00:14:37.925 "data_size": 65536
00:14:37.925 },
00:14:37.925 {
00:14:37.925 "name": "BaseBdev3",
00:14:37.925 "uuid": "a4b8c389-484e-543d-bff7-b56ac39feb72",
00:14:37.925 "is_configured": true,
00:14:37.925 "data_offset": 0,
00:14:37.925 "data_size": 65536
00:14:37.925 },
00:14:37.925 {
00:14:37.925 "name": "BaseBdev4",
00:14:37.925 "uuid": "6bc014b6-95e2-5040-b758-419f9c8c5425",
00:14:37.925 "is_configured": true,
00:14:37.925 "data_offset": 0,
00:14:37.925 "data_size": 65536
00:14:37.925 }
00:14:37.925 ]
00:14:37.925 }'
00:14:37.925 16:11:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:14:37.925 16:11:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:14:37.925 16:11:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:14:37.925 16:11:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:14:37.925 16:11:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:14:37.925 16:11:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:37.925 16:11:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:37.925 [2024-12-12 16:11:04.235188] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:14:38.184 [2024-12-12 16:11:04.432735] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:14:38.184 [2024-12-12 16:11:04.443408] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:14:38.184 [2024-12-12 16:11:04.443499] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:14:38.184 [2024-12-12 16:11:04.443517] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:14:38.184 [2024-12-12 16:11:04.471068] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220
00:14:38.184 16:11:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:38.184 16:11:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:14:38.184 16:11:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:14:38.184 16:11:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:14:38.184 16:11:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:14:38.184 16:11:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:14:38.184 16:11:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:14:38.184 16:11:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:38.184 16:11:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:38.184 16:11:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:38.184 16:11:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:38.184 16:11:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:38.184 16:11:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:38.184 16:11:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:38.184 16:11:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:38.184 16:11:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0
]] 00:14:38.185 16:11:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:38.185 "name": "raid_bdev1", 00:14:38.185 "uuid": "0194df06-656e-487a-ba0c-57a0e81f1283", 00:14:38.185 "strip_size_kb": 0, 00:14:38.185 "state": "online", 00:14:38.185 "raid_level": "raid1", 00:14:38.185 "superblock": false, 00:14:38.185 "num_base_bdevs": 4, 00:14:38.185 "num_base_bdevs_discovered": 3, 00:14:38.185 "num_base_bdevs_operational": 3, 00:14:38.185 "base_bdevs_list": [ 00:14:38.185 { 00:14:38.185 "name": null, 00:14:38.185 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.185 "is_configured": false, 00:14:38.185 "data_offset": 0, 00:14:38.185 "data_size": 65536 00:14:38.185 }, 00:14:38.185 { 00:14:38.185 "name": "BaseBdev2", 00:14:38.185 "uuid": "ce6a046a-5f9c-5799-9bd3-fe4bde51badb", 00:14:38.185 "is_configured": true, 00:14:38.185 "data_offset": 0, 00:14:38.185 "data_size": 65536 00:14:38.185 }, 00:14:38.185 { 00:14:38.185 "name": "BaseBdev3", 00:14:38.185 "uuid": "a4b8c389-484e-543d-bff7-b56ac39feb72", 00:14:38.185 "is_configured": true, 00:14:38.185 "data_offset": 0, 00:14:38.185 "data_size": 65536 00:14:38.185 }, 00:14:38.185 { 00:14:38.185 "name": "BaseBdev4", 00:14:38.185 "uuid": "6bc014b6-95e2-5040-b758-419f9c8c5425", 00:14:38.185 "is_configured": true, 00:14:38.185 "data_offset": 0, 00:14:38.185 "data_size": 65536 00:14:38.185 } 00:14:38.185 ] 00:14:38.185 }' 00:14:38.185 16:11:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:38.185 16:11:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:38.701 141.00 IOPS, 423.00 MiB/s [2024-12-12T16:11:05.053Z] 16:11:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:38.701 16:11:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:38.701 16:11:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:14:38.701 16:11:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:38.701 16:11:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:38.701 16:11:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.701 16:11:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:38.701 16:11:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.701 16:11:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:38.701 16:11:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.701 16:11:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:38.701 "name": "raid_bdev1", 00:14:38.701 "uuid": "0194df06-656e-487a-ba0c-57a0e81f1283", 00:14:38.701 "strip_size_kb": 0, 00:14:38.701 "state": "online", 00:14:38.701 "raid_level": "raid1", 00:14:38.701 "superblock": false, 00:14:38.701 "num_base_bdevs": 4, 00:14:38.701 "num_base_bdevs_discovered": 3, 00:14:38.701 "num_base_bdevs_operational": 3, 00:14:38.701 "base_bdevs_list": [ 00:14:38.701 { 00:14:38.701 "name": null, 00:14:38.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.701 "is_configured": false, 00:14:38.701 "data_offset": 0, 00:14:38.701 "data_size": 65536 00:14:38.701 }, 00:14:38.701 { 00:14:38.701 "name": "BaseBdev2", 00:14:38.701 "uuid": "ce6a046a-5f9c-5799-9bd3-fe4bde51badb", 00:14:38.701 "is_configured": true, 00:14:38.701 "data_offset": 0, 00:14:38.701 "data_size": 65536 00:14:38.701 }, 00:14:38.701 { 00:14:38.701 "name": "BaseBdev3", 00:14:38.701 "uuid": "a4b8c389-484e-543d-bff7-b56ac39feb72", 00:14:38.701 "is_configured": true, 00:14:38.701 "data_offset": 0, 00:14:38.701 "data_size": 65536 00:14:38.701 }, 00:14:38.701 { 00:14:38.701 "name": "BaseBdev4", 00:14:38.701 
"uuid": "6bc014b6-95e2-5040-b758-419f9c8c5425", 00:14:38.701 "is_configured": true, 00:14:38.702 "data_offset": 0, 00:14:38.702 "data_size": 65536 00:14:38.702 } 00:14:38.702 ] 00:14:38.702 }' 00:14:38.702 16:11:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:38.702 16:11:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:38.702 16:11:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:38.702 16:11:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:38.702 16:11:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:38.702 16:11:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.702 16:11:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:38.702 [2024-12-12 16:11:05.039959] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:38.960 16:11:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.960 16:11:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:38.960 [2024-12-12 16:11:05.130592] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:14:38.960 [2024-12-12 16:11:05.132959] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:38.960 [2024-12-12 16:11:05.253228] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:38.960 [2024-12-12 16:11:05.255793] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:39.218 [2024-12-12 16:11:05.469872] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 
offset_begin: 0 offset_end: 6144 00:14:39.218 [2024-12-12 16:11:05.471181] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:39.477 143.00 IOPS, 429.00 MiB/s [2024-12-12T16:11:05.829Z] [2024-12-12 16:11:05.796040] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:39.477 [2024-12-12 16:11:05.797055] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:39.735 [2024-12-12 16:11:06.020993] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:39.995 16:11:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:39.995 16:11:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:39.995 16:11:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:39.995 16:11:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:39.995 16:11:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:39.995 16:11:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.995 16:11:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.995 16:11:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:39.995 16:11:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:39.995 16:11:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.995 16:11:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:39.995 "name": "raid_bdev1", 00:14:39.995 
"uuid": "0194df06-656e-487a-ba0c-57a0e81f1283", 00:14:39.995 "strip_size_kb": 0, 00:14:39.995 "state": "online", 00:14:39.995 "raid_level": "raid1", 00:14:39.995 "superblock": false, 00:14:39.995 "num_base_bdevs": 4, 00:14:39.995 "num_base_bdevs_discovered": 4, 00:14:39.995 "num_base_bdevs_operational": 4, 00:14:39.995 "process": { 00:14:39.995 "type": "rebuild", 00:14:39.995 "target": "spare", 00:14:39.995 "progress": { 00:14:39.995 "blocks": 10240, 00:14:39.995 "percent": 15 00:14:39.995 } 00:14:39.995 }, 00:14:39.995 "base_bdevs_list": [ 00:14:39.995 { 00:14:39.995 "name": "spare", 00:14:39.995 "uuid": "2e37a8f2-a2a4-5890-bf60-a000cc2a5f91", 00:14:39.995 "is_configured": true, 00:14:39.995 "data_offset": 0, 00:14:39.995 "data_size": 65536 00:14:39.995 }, 00:14:39.995 { 00:14:39.995 "name": "BaseBdev2", 00:14:39.995 "uuid": "ce6a046a-5f9c-5799-9bd3-fe4bde51badb", 00:14:39.995 "is_configured": true, 00:14:39.995 "data_offset": 0, 00:14:39.995 "data_size": 65536 00:14:39.995 }, 00:14:39.995 { 00:14:39.995 "name": "BaseBdev3", 00:14:39.995 "uuid": "a4b8c389-484e-543d-bff7-b56ac39feb72", 00:14:39.995 "is_configured": true, 00:14:39.995 "data_offset": 0, 00:14:39.995 "data_size": 65536 00:14:39.995 }, 00:14:39.995 { 00:14:39.995 "name": "BaseBdev4", 00:14:39.995 "uuid": "6bc014b6-95e2-5040-b758-419f9c8c5425", 00:14:39.995 "is_configured": true, 00:14:39.995 "data_offset": 0, 00:14:39.995 "data_size": 65536 00:14:39.995 } 00:14:39.995 ] 00:14:39.995 }' 00:14:39.995 16:11:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:39.995 16:11:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:39.995 16:11:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:39.995 16:11:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:39.995 16:11:06 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:14:39.995 16:11:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:39.995 16:11:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:39.995 16:11:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:14:39.995 16:11:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:39.995 16:11:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.995 16:11:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:39.995 [2024-12-12 16:11:06.234064] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:39.995 [2024-12-12 16:11:06.315951] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:14:39.995 [2024-12-12 16:11:06.316004] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:14:39.995 16:11:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.995 16:11:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:14:39.995 16:11:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:14:39.995 16:11:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:39.995 16:11:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:39.995 16:11:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:39.995 16:11:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:39.995 16:11:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:39.995 16:11:06 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.995 16:11:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.996 16:11:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:39.996 16:11:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:39.996 16:11:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.255 16:11:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:40.255 "name": "raid_bdev1", 00:14:40.255 "uuid": "0194df06-656e-487a-ba0c-57a0e81f1283", 00:14:40.255 "strip_size_kb": 0, 00:14:40.255 "state": "online", 00:14:40.255 "raid_level": "raid1", 00:14:40.255 "superblock": false, 00:14:40.255 "num_base_bdevs": 4, 00:14:40.255 "num_base_bdevs_discovered": 3, 00:14:40.255 "num_base_bdevs_operational": 3, 00:14:40.255 "process": { 00:14:40.255 "type": "rebuild", 00:14:40.255 "target": "spare", 00:14:40.255 "progress": { 00:14:40.255 "blocks": 14336, 00:14:40.255 "percent": 21 00:14:40.255 } 00:14:40.255 }, 00:14:40.255 "base_bdevs_list": [ 00:14:40.255 { 00:14:40.255 "name": "spare", 00:14:40.255 "uuid": "2e37a8f2-a2a4-5890-bf60-a000cc2a5f91", 00:14:40.255 "is_configured": true, 00:14:40.255 "data_offset": 0, 00:14:40.255 "data_size": 65536 00:14:40.255 }, 00:14:40.255 { 00:14:40.255 "name": null, 00:14:40.255 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.255 "is_configured": false, 00:14:40.255 "data_offset": 0, 00:14:40.255 "data_size": 65536 00:14:40.255 }, 00:14:40.255 { 00:14:40.255 "name": "BaseBdev3", 00:14:40.255 "uuid": "a4b8c389-484e-543d-bff7-b56ac39feb72", 00:14:40.255 "is_configured": true, 00:14:40.255 "data_offset": 0, 00:14:40.255 "data_size": 65536 00:14:40.255 }, 00:14:40.255 { 00:14:40.255 "name": "BaseBdev4", 00:14:40.255 "uuid": "6bc014b6-95e2-5040-b758-419f9c8c5425", 
00:14:40.255 "is_configured": true, 00:14:40.255 "data_offset": 0, 00:14:40.255 "data_size": 65536 00:14:40.255 } 00:14:40.255 ] 00:14:40.255 }' 00:14:40.255 16:11:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:40.255 16:11:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:40.255 16:11:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:40.255 16:11:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:40.255 16:11:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=494 00:14:40.255 16:11:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:40.255 16:11:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:40.255 16:11:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:40.255 16:11:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:40.255 16:11:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:40.255 16:11:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:40.255 [2024-12-12 16:11:06.455392] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:40.255 16:11:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.255 16:11:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:40.255 16:11:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.255 16:11:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:40.255 
16:11:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.255 16:11:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:40.255 "name": "raid_bdev1", 00:14:40.255 "uuid": "0194df06-656e-487a-ba0c-57a0e81f1283", 00:14:40.255 "strip_size_kb": 0, 00:14:40.255 "state": "online", 00:14:40.255 "raid_level": "raid1", 00:14:40.255 "superblock": false, 00:14:40.255 "num_base_bdevs": 4, 00:14:40.255 "num_base_bdevs_discovered": 3, 00:14:40.255 "num_base_bdevs_operational": 3, 00:14:40.255 "process": { 00:14:40.255 "type": "rebuild", 00:14:40.255 "target": "spare", 00:14:40.255 "progress": { 00:14:40.255 "blocks": 16384, 00:14:40.255 "percent": 25 00:14:40.255 } 00:14:40.255 }, 00:14:40.255 "base_bdevs_list": [ 00:14:40.255 { 00:14:40.255 "name": "spare", 00:14:40.255 "uuid": "2e37a8f2-a2a4-5890-bf60-a000cc2a5f91", 00:14:40.255 "is_configured": true, 00:14:40.255 "data_offset": 0, 00:14:40.255 "data_size": 65536 00:14:40.255 }, 00:14:40.255 { 00:14:40.255 "name": null, 00:14:40.255 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.255 "is_configured": false, 00:14:40.255 "data_offset": 0, 00:14:40.255 "data_size": 65536 00:14:40.255 }, 00:14:40.255 { 00:14:40.255 "name": "BaseBdev3", 00:14:40.255 "uuid": "a4b8c389-484e-543d-bff7-b56ac39feb72", 00:14:40.255 "is_configured": true, 00:14:40.255 "data_offset": 0, 00:14:40.255 "data_size": 65536 00:14:40.255 }, 00:14:40.255 { 00:14:40.255 "name": "BaseBdev4", 00:14:40.255 "uuid": "6bc014b6-95e2-5040-b758-419f9c8c5425", 00:14:40.255 "is_configured": true, 00:14:40.255 "data_offset": 0, 00:14:40.255 "data_size": 65536 00:14:40.255 } 00:14:40.255 ] 00:14:40.255 }' 00:14:40.255 16:11:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:40.255 16:11:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:40.255 16:11:06 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:40.255 16:11:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:40.255 16:11:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:40.514 123.00 IOPS, 369.00 MiB/s [2024-12-12T16:11:06.866Z] [2024-12-12 16:11:06.816677] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:14:40.772 [2024-12-12 16:11:06.929237] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:14:41.031 [2024-12-12 16:11:07.247345] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:14:41.031 [2024-12-12 16:11:07.249165] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:14:41.289 16:11:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:41.289 16:11:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:41.289 16:11:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:41.289 16:11:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:41.289 16:11:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:41.289 16:11:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:41.289 16:11:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.289 16:11:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.289 16:11:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:14:41.289 16:11:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:41.289 16:11:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.289 16:11:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:41.289 "name": "raid_bdev1", 00:14:41.289 "uuid": "0194df06-656e-487a-ba0c-57a0e81f1283", 00:14:41.289 "strip_size_kb": 0, 00:14:41.289 "state": "online", 00:14:41.289 "raid_level": "raid1", 00:14:41.289 "superblock": false, 00:14:41.289 "num_base_bdevs": 4, 00:14:41.289 "num_base_bdevs_discovered": 3, 00:14:41.289 "num_base_bdevs_operational": 3, 00:14:41.289 "process": { 00:14:41.289 "type": "rebuild", 00:14:41.289 "target": "spare", 00:14:41.289 "progress": { 00:14:41.289 "blocks": 28672, 00:14:41.289 "percent": 43 00:14:41.289 } 00:14:41.289 }, 00:14:41.289 "base_bdevs_list": [ 00:14:41.289 { 00:14:41.289 "name": "spare", 00:14:41.289 "uuid": "2e37a8f2-a2a4-5890-bf60-a000cc2a5f91", 00:14:41.289 "is_configured": true, 00:14:41.289 "data_offset": 0, 00:14:41.289 "data_size": 65536 00:14:41.289 }, 00:14:41.289 { 00:14:41.289 "name": null, 00:14:41.289 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.289 "is_configured": false, 00:14:41.289 "data_offset": 0, 00:14:41.289 "data_size": 65536 00:14:41.289 }, 00:14:41.289 { 00:14:41.289 "name": "BaseBdev3", 00:14:41.289 "uuid": "a4b8c389-484e-543d-bff7-b56ac39feb72", 00:14:41.289 "is_configured": true, 00:14:41.289 "data_offset": 0, 00:14:41.289 "data_size": 65536 00:14:41.289 }, 00:14:41.289 { 00:14:41.289 "name": "BaseBdev4", 00:14:41.289 "uuid": "6bc014b6-95e2-5040-b758-419f9c8c5425", 00:14:41.289 "is_configured": true, 00:14:41.289 "data_offset": 0, 00:14:41.289 "data_size": 65536 00:14:41.289 } 00:14:41.289 ] 00:14:41.289 }' 00:14:41.289 16:11:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:41.548 16:11:07 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:41.548 16:11:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:41.548 16:11:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:41.548 16:11:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:41.548 107.20 IOPS, 321.60 MiB/s [2024-12-12T16:11:07.900Z] [2024-12-12 16:11:07.842414] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:14:42.117 [2024-12-12 16:11:08.453802] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:14:42.375 16:11:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:42.375 16:11:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:42.375 16:11:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:42.375 16:11:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:42.375 16:11:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:42.375 16:11:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:42.375 16:11:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.375 16:11:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.375 16:11:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:42.375 16:11:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:42.375 16:11:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.634 96.00 
IOPS, 288.00 MiB/s [2024-12-12T16:11:08.986Z] 16:11:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:42.634 "name": "raid_bdev1", 00:14:42.634 "uuid": "0194df06-656e-487a-ba0c-57a0e81f1283", 00:14:42.634 "strip_size_kb": 0, 00:14:42.634 "state": "online", 00:14:42.634 "raid_level": "raid1", 00:14:42.634 "superblock": false, 00:14:42.634 "num_base_bdevs": 4, 00:14:42.634 "num_base_bdevs_discovered": 3, 00:14:42.634 "num_base_bdevs_operational": 3, 00:14:42.634 "process": { 00:14:42.634 "type": "rebuild", 00:14:42.634 "target": "spare", 00:14:42.634 "progress": { 00:14:42.634 "blocks": 49152, 00:14:42.634 "percent": 75 00:14:42.634 } 00:14:42.634 }, 00:14:42.634 "base_bdevs_list": [ 00:14:42.634 { 00:14:42.634 "name": "spare", 00:14:42.634 "uuid": "2e37a8f2-a2a4-5890-bf60-a000cc2a5f91", 00:14:42.634 "is_configured": true, 00:14:42.634 "data_offset": 0, 00:14:42.634 "data_size": 65536 00:14:42.634 }, 00:14:42.634 { 00:14:42.634 "name": null, 00:14:42.634 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.634 "is_configured": false, 00:14:42.634 "data_offset": 0, 00:14:42.634 "data_size": 65536 00:14:42.634 }, 00:14:42.634 { 00:14:42.634 "name": "BaseBdev3", 00:14:42.634 "uuid": "a4b8c389-484e-543d-bff7-b56ac39feb72", 00:14:42.634 "is_configured": true, 00:14:42.634 "data_offset": 0, 00:14:42.634 "data_size": 65536 00:14:42.634 }, 00:14:42.634 { 00:14:42.634 "name": "BaseBdev4", 00:14:42.634 "uuid": "6bc014b6-95e2-5040-b758-419f9c8c5425", 00:14:42.634 "is_configured": true, 00:14:42.634 "data_offset": 0, 00:14:42.634 "data_size": 65536 00:14:42.634 } 00:14:42.634 ] 00:14:42.634 }' 00:14:42.634 16:11:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:42.634 16:11:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:42.634 16:11:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 
00:14:42.634 16:11:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:42.634 16:11:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:42.634 [2024-12-12 16:11:08.900173] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:14:43.569 [2024-12-12 16:11:09.588325] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:43.570 [2024-12-12 16:11:09.693803] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:43.570 [2024-12-12 16:11:09.699445] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:43.570 88.43 IOPS, 265.29 MiB/s [2024-12-12T16:11:09.922Z] 16:11:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:43.570 16:11:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:43.570 16:11:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:43.570 16:11:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:43.570 16:11:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:43.570 16:11:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:43.570 16:11:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.570 16:11:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.570 16:11:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:43.570 16:11:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:43.570 16:11:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:14:43.570 16:11:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:43.570 "name": "raid_bdev1", 00:14:43.570 "uuid": "0194df06-656e-487a-ba0c-57a0e81f1283", 00:14:43.570 "strip_size_kb": 0, 00:14:43.570 "state": "online", 00:14:43.570 "raid_level": "raid1", 00:14:43.570 "superblock": false, 00:14:43.570 "num_base_bdevs": 4, 00:14:43.570 "num_base_bdevs_discovered": 3, 00:14:43.570 "num_base_bdevs_operational": 3, 00:14:43.570 "base_bdevs_list": [ 00:14:43.570 { 00:14:43.570 "name": "spare", 00:14:43.570 "uuid": "2e37a8f2-a2a4-5890-bf60-a000cc2a5f91", 00:14:43.570 "is_configured": true, 00:14:43.570 "data_offset": 0, 00:14:43.570 "data_size": 65536 00:14:43.570 }, 00:14:43.570 { 00:14:43.570 "name": null, 00:14:43.570 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.570 "is_configured": false, 00:14:43.570 "data_offset": 0, 00:14:43.570 "data_size": 65536 00:14:43.570 }, 00:14:43.570 { 00:14:43.570 "name": "BaseBdev3", 00:14:43.570 "uuid": "a4b8c389-484e-543d-bff7-b56ac39feb72", 00:14:43.570 "is_configured": true, 00:14:43.570 "data_offset": 0, 00:14:43.570 "data_size": 65536 00:14:43.570 }, 00:14:43.570 { 00:14:43.570 "name": "BaseBdev4", 00:14:43.570 "uuid": "6bc014b6-95e2-5040-b758-419f9c8c5425", 00:14:43.570 "is_configured": true, 00:14:43.570 "data_offset": 0, 00:14:43.570 "data_size": 65536 00:14:43.570 } 00:14:43.570 ] 00:14:43.570 }' 00:14:43.570 16:11:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:43.829 16:11:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:43.829 16:11:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:43.829 16:11:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:43.829 16:11:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:14:43.829 16:11:09 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:43.829 16:11:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:43.829 16:11:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:43.829 16:11:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:43.829 16:11:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:43.829 16:11:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:43.829 16:11:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.829 16:11:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.829 16:11:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:43.829 16:11:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.829 16:11:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:43.829 "name": "raid_bdev1", 00:14:43.829 "uuid": "0194df06-656e-487a-ba0c-57a0e81f1283", 00:14:43.829 "strip_size_kb": 0, 00:14:43.829 "state": "online", 00:14:43.829 "raid_level": "raid1", 00:14:43.829 "superblock": false, 00:14:43.829 "num_base_bdevs": 4, 00:14:43.829 "num_base_bdevs_discovered": 3, 00:14:43.829 "num_base_bdevs_operational": 3, 00:14:43.829 "base_bdevs_list": [ 00:14:43.829 { 00:14:43.829 "name": "spare", 00:14:43.829 "uuid": "2e37a8f2-a2a4-5890-bf60-a000cc2a5f91", 00:14:43.829 "is_configured": true, 00:14:43.829 "data_offset": 0, 00:14:43.829 "data_size": 65536 00:14:43.829 }, 00:14:43.829 { 00:14:43.829 "name": null, 00:14:43.829 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.829 "is_configured": false, 00:14:43.829 "data_offset": 0, 00:14:43.829 "data_size": 65536 
00:14:43.829 }, 00:14:43.829 { 00:14:43.829 "name": "BaseBdev3", 00:14:43.829 "uuid": "a4b8c389-484e-543d-bff7-b56ac39feb72", 00:14:43.829 "is_configured": true, 00:14:43.829 "data_offset": 0, 00:14:43.829 "data_size": 65536 00:14:43.829 }, 00:14:43.829 { 00:14:43.829 "name": "BaseBdev4", 00:14:43.829 "uuid": "6bc014b6-95e2-5040-b758-419f9c8c5425", 00:14:43.829 "is_configured": true, 00:14:43.829 "data_offset": 0, 00:14:43.829 "data_size": 65536 00:14:43.829 } 00:14:43.829 ] 00:14:43.829 }' 00:14:43.829 16:11:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:43.829 16:11:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:43.829 16:11:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:43.829 16:11:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:43.829 16:11:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:43.829 16:11:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:43.829 16:11:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:43.829 16:11:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:43.829 16:11:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:43.829 16:11:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:43.829 16:11:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:43.829 16:11:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:43.829 16:11:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:43.829 16:11:10 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:43.829 16:11:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.829 16:11:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.829 16:11:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:43.829 16:11:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:43.829 16:11:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.829 16:11:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:43.829 "name": "raid_bdev1", 00:14:43.829 "uuid": "0194df06-656e-487a-ba0c-57a0e81f1283", 00:14:43.829 "strip_size_kb": 0, 00:14:43.829 "state": "online", 00:14:43.829 "raid_level": "raid1", 00:14:43.829 "superblock": false, 00:14:43.829 "num_base_bdevs": 4, 00:14:43.829 "num_base_bdevs_discovered": 3, 00:14:43.829 "num_base_bdevs_operational": 3, 00:14:43.829 "base_bdevs_list": [ 00:14:43.829 { 00:14:43.829 "name": "spare", 00:14:43.829 "uuid": "2e37a8f2-a2a4-5890-bf60-a000cc2a5f91", 00:14:43.829 "is_configured": true, 00:14:43.829 "data_offset": 0, 00:14:43.829 "data_size": 65536 00:14:43.829 }, 00:14:43.829 { 00:14:43.829 "name": null, 00:14:43.829 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.829 "is_configured": false, 00:14:43.829 "data_offset": 0, 00:14:43.829 "data_size": 65536 00:14:43.829 }, 00:14:43.829 { 00:14:43.829 "name": "BaseBdev3", 00:14:43.829 "uuid": "a4b8c389-484e-543d-bff7-b56ac39feb72", 00:14:43.829 "is_configured": true, 00:14:43.829 "data_offset": 0, 00:14:43.829 "data_size": 65536 00:14:43.829 }, 00:14:43.829 { 00:14:43.829 "name": "BaseBdev4", 00:14:43.829 "uuid": "6bc014b6-95e2-5040-b758-419f9c8c5425", 00:14:43.829 "is_configured": true, 00:14:43.829 "data_offset": 0, 00:14:43.829 "data_size": 65536 00:14:43.829 } 
00:14:43.829 ] 00:14:43.829 }' 00:14:43.829 16:11:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:43.829 16:11:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:44.396 16:11:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:44.396 16:11:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.396 16:11:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:44.396 [2024-12-12 16:11:10.531353] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:44.396 [2024-12-12 16:11:10.531402] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:44.396 00:14:44.396 Latency(us) 00:14:44.396 [2024-12-12T16:11:10.748Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:44.396 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:14:44.396 raid_bdev1 : 7.88 82.90 248.70 0.00 0.00 17256.71 338.05 114015.47 00:14:44.396 [2024-12-12T16:11:10.748Z] =================================================================================================================== 00:14:44.396 [2024-12-12T16:11:10.748Z] Total : 82.90 248.70 0.00 0.00 17256.71 338.05 114015.47 00:14:44.396 [2024-12-12 16:11:10.600087] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:44.396 [2024-12-12 16:11:10.600161] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:44.396 [2024-12-12 16:11:10.600268] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:44.396 [2024-12-12 16:11:10.600280] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:44.396 { 00:14:44.396 "results": [ 00:14:44.396 { 00:14:44.396 "job": 
"raid_bdev1", 00:14:44.396 "core_mask": "0x1", 00:14:44.396 "workload": "randrw", 00:14:44.396 "percentage": 50, 00:14:44.396 "status": "finished", 00:14:44.396 "queue_depth": 2, 00:14:44.396 "io_size": 3145728, 00:14:44.396 "runtime": 7.876887, 00:14:44.396 "iops": 82.90077031700467, 00:14:44.396 "mibps": 248.702310951014, 00:14:44.396 "io_failed": 0, 00:14:44.396 "io_timeout": 0, 00:14:44.396 "avg_latency_us": 17256.71147608953, 00:14:44.396 "min_latency_us": 338.05414847161575, 00:14:44.396 "max_latency_us": 114015.46899563319 00:14:44.396 } 00:14:44.396 ], 00:14:44.396 "core_count": 1 00:14:44.396 } 00:14:44.396 16:11:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.396 16:11:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.396 16:11:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:14:44.396 16:11:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.396 16:11:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:44.396 16:11:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.396 16:11:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:44.396 16:11:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:44.396 16:11:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:14:44.396 16:11:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:14:44.396 16:11:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:44.396 16:11:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:14:44.396 16:11:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:44.396 
16:11:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:44.397 16:11:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:44.397 16:11:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:44.397 16:11:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:44.397 16:11:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:44.397 16:11:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:14:44.655 /dev/nbd0 00:14:44.655 16:11:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:44.655 16:11:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:44.655 16:11:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:44.655 16:11:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:14:44.655 16:11:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:44.655 16:11:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:44.655 16:11:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:44.655 16:11:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:14:44.655 16:11:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:44.655 16:11:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:44.655 16:11:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:44.655 1+0 records in 00:14:44.655 1+0 records out 00:14:44.655 4096 bytes (4.1 kB, 
4.0 KiB) copied, 0.000409844 s, 10.0 MB/s 00:14:44.655 16:11:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:44.655 16:11:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:14:44.655 16:11:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:44.655 16:11:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:44.655 16:11:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:14:44.655 16:11:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:44.655 16:11:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:44.655 16:11:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:44.655 16:11:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:14:44.655 16:11:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:14:44.655 16:11:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:44.655 16:11:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:14:44.655 16:11:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:14:44.655 16:11:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:44.655 16:11:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:14:44.655 16:11:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:44.655 16:11:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:44.655 16:11:10 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@11 -- # local nbd_list 00:14:44.655 16:11:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:44.655 16:11:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:44.655 16:11:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:44.655 16:11:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:14:44.914 /dev/nbd1 00:14:44.914 16:11:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:44.914 16:11:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:44.914 16:11:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:44.914 16:11:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:14:44.914 16:11:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:44.914 16:11:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:44.914 16:11:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:44.914 16:11:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:14:44.914 16:11:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:44.914 16:11:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:44.914 16:11:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:44.914 1+0 records in 00:14:44.914 1+0 records out 00:14:44.914 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000361731 s, 11.3 MB/s 00:14:44.914 16:11:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:44.914 16:11:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:14:44.914 16:11:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:44.914 16:11:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:44.914 16:11:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:14:44.914 16:11:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:44.914 16:11:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:44.914 16:11:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:45.173 16:11:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:45.173 16:11:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:45.173 16:11:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:45.173 16:11:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:45.173 16:11:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:45.173 16:11:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:45.173 16:11:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:45.432 16:11:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:45.432 16:11:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:45.432 16:11:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:45.432 16:11:11 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:45.432 16:11:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:45.432 16:11:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:45.432 16:11:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:45.432 16:11:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:45.432 16:11:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:45.432 16:11:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:14:45.432 16:11:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:14:45.432 16:11:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:45.432 16:11:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:14:45.432 16:11:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:45.432 16:11:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:45.432 16:11:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:45.432 16:11:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:45.432 16:11:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:45.432 16:11:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:45.432 16:11:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:14:45.432 /dev/nbd1 00:14:45.691 16:11:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:45.691 16:11:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # 
waitfornbd nbd1 00:14:45.691 16:11:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:45.691 16:11:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:14:45.691 16:11:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:45.691 16:11:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:45.691 16:11:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:45.691 16:11:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:14:45.691 16:11:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:45.691 16:11:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:45.691 16:11:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:45.691 1+0 records in 00:14:45.691 1+0 records out 00:14:45.691 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000391671 s, 10.5 MB/s 00:14:45.691 16:11:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:45.691 16:11:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:14:45.691 16:11:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:45.691 16:11:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:45.691 16:11:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:14:45.691 16:11:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:45.691 16:11:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:45.691 16:11:11 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:45.691 16:11:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:45.691 16:11:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:45.691 16:11:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:45.691 16:11:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:45.691 16:11:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:45.691 16:11:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:45.691 16:11:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:45.950 16:11:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:45.950 16:11:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:45.950 16:11:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:45.950 16:11:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:45.950 16:11:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:45.950 16:11:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:45.950 16:11:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:45.950 16:11:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:45.950 16:11:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:45.950 16:11:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:45.950 16:11:12 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:45.950 16:11:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:45.950 16:11:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:45.950 16:11:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:45.950 16:11:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:46.209 16:11:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:46.209 16:11:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:46.209 16:11:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:46.209 16:11:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:46.209 16:11:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:46.209 16:11:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:46.209 16:11:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:46.209 16:11:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:46.209 16:11:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:14:46.209 16:11:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 80835 00:14:46.209 16:11:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 80835 ']' 00:14:46.209 16:11:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 80835 00:14:46.209 16:11:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:14:46.209 16:11:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:46.209 
16:11:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80835 00:14:46.209 16:11:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:46.209 16:11:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:46.209 16:11:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80835' 00:14:46.209 killing process with pid 80835 00:14:46.209 16:11:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 80835 00:14:46.209 Received shutdown signal, test time was about 9.688535 seconds 00:14:46.209 00:14:46.209 Latency(us) 00:14:46.209 [2024-12-12T16:11:12.561Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:46.209 [2024-12-12T16:11:12.561Z] =================================================================================================================== 00:14:46.209 [2024-12-12T16:11:12.561Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:46.209 [2024-12-12 16:11:12.387344] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:46.209 16:11:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 80835 00:14:46.467 [2024-12-12 16:11:12.793689] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:47.856 16:11:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:14:47.856 00:14:47.856 real 0m13.085s 00:14:47.856 user 0m16.155s 00:14:47.856 sys 0m1.923s 00:14:47.856 16:11:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:47.856 16:11:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:47.856 ************************************ 00:14:47.856 END TEST raid_rebuild_test_io 00:14:47.856 ************************************ 00:14:47.856 16:11:14 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test 
raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:14:47.856 16:11:14 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:47.856 16:11:14 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:47.856 16:11:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:47.856 ************************************ 00:14:47.856 START TEST raid_rebuild_test_sb_io 00:14:47.856 ************************************ 00:14:47.856 16:11:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true true true 00:14:47.856 16:11:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:47.856 16:11:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:47.856 16:11:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:14:47.856 16:11:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:14:47.856 16:11:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:47.856 16:11:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:47.856 16:11:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:47.857 16:11:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:47.857 16:11:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:47.857 16:11:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:47.857 16:11:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:47.857 16:11:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:47.857 16:11:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:47.857 16:11:14 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:47.857 16:11:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:47.857 16:11:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:47.857 16:11:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:14:47.857 16:11:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:47.857 16:11:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:47.857 16:11:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:47.857 16:11:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:47.857 16:11:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:47.857 16:11:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:47.857 16:11:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:47.857 16:11:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:47.857 16:11:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:47.857 16:11:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:47.857 16:11:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:47.857 16:11:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:14:47.857 16:11:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:14:47.857 16:11:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=81244 00:14:47.857 16:11:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # 
waitforlisten 81244 00:14:47.857 16:11:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:47.857 16:11:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 81244 ']' 00:14:47.857 16:11:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:47.857 16:11:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:47.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:47.857 16:11:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:47.857 16:11:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:47.857 16:11:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:47.857 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:47.857 Zero copy mechanism will not be used. 00:14:47.857 [2024-12-12 16:11:14.131181] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:14:47.857 [2024-12-12 16:11:14.131316] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81244 ] 00:14:48.115 [2024-12-12 16:11:14.304278] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:48.115 [2024-12-12 16:11:14.428370] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:48.374 [2024-12-12 16:11:14.638529] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:48.374 [2024-12-12 16:11:14.638583] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:48.633 16:11:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:48.633 16:11:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:14:48.633 16:11:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:48.633 16:11:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:48.633 16:11:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.633 16:11:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:48.892 BaseBdev1_malloc 00:14:48.892 16:11:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.892 16:11:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:48.892 16:11:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.892 16:11:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:48.892 [2024-12-12 16:11:15.007087] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:48.892 [2024-12-12 16:11:15.007148] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:48.892 [2024-12-12 16:11:15.007169] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:48.892 [2024-12-12 16:11:15.007180] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:48.892 [2024-12-12 16:11:15.009283] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:48.892 [2024-12-12 16:11:15.009324] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:48.892 BaseBdev1 00:14:48.892 16:11:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.892 16:11:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:48.892 16:11:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:48.892 16:11:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.892 16:11:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:48.892 BaseBdev2_malloc 00:14:48.892 16:11:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.892 16:11:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:48.892 16:11:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.893 16:11:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:48.893 [2024-12-12 16:11:15.061900] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:48.893 [2024-12-12 16:11:15.061988] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:14:48.893 [2024-12-12 16:11:15.062008] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:48.893 [2024-12-12 16:11:15.062020] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:48.893 [2024-12-12 16:11:15.064468] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:48.893 [2024-12-12 16:11:15.064513] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:48.893 BaseBdev2 00:14:48.893 16:11:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.893 16:11:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:48.893 16:11:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:48.893 16:11:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.893 16:11:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:48.893 BaseBdev3_malloc 00:14:48.893 16:11:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.893 16:11:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:48.893 16:11:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.893 16:11:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:48.893 [2024-12-12 16:11:15.130044] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:48.893 [2024-12-12 16:11:15.130101] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:48.893 [2024-12-12 16:11:15.130121] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:48.893 
[2024-12-12 16:11:15.130132] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:48.893 [2024-12-12 16:11:15.132190] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:48.893 [2024-12-12 16:11:15.132230] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:48.893 BaseBdev3 00:14:48.893 16:11:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.893 16:11:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:48.893 16:11:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:48.893 16:11:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.893 16:11:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:48.893 BaseBdev4_malloc 00:14:48.893 16:11:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.893 16:11:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:48.893 16:11:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.893 16:11:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:48.893 [2024-12-12 16:11:15.185402] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:48.893 [2024-12-12 16:11:15.185462] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:48.893 [2024-12-12 16:11:15.185482] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:48.893 [2024-12-12 16:11:15.185492] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:48.893 [2024-12-12 16:11:15.187468] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:48.893 [2024-12-12 16:11:15.187508] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:48.893 BaseBdev4 00:14:48.893 16:11:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.893 16:11:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:48.893 16:11:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.893 16:11:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:48.893 spare_malloc 00:14:48.893 16:11:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.893 16:11:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:48.893 16:11:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.893 16:11:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:49.153 spare_delay 00:14:49.153 16:11:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.153 16:11:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:49.153 16:11:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.153 16:11:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:49.153 [2024-12-12 16:11:15.254080] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:49.153 [2024-12-12 16:11:15.254133] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:49.153 [2024-12-12 16:11:15.254149] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x61600000a880 00:14:49.153 [2024-12-12 16:11:15.254159] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:49.153 [2024-12-12 16:11:15.256213] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:49.153 [2024-12-12 16:11:15.256253] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:49.153 spare 00:14:49.153 16:11:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.153 16:11:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:49.153 16:11:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.153 16:11:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:49.153 [2024-12-12 16:11:15.266099] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:49.153 [2024-12-12 16:11:15.267935] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:49.153 [2024-12-12 16:11:15.268018] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:49.153 [2024-12-12 16:11:15.268071] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:49.153 [2024-12-12 16:11:15.268268] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:49.153 [2024-12-12 16:11:15.268291] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:49.153 [2024-12-12 16:11:15.268526] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:49.153 [2024-12-12 16:11:15.268719] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:49.153 [2024-12-12 16:11:15.268737] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:49.153 [2024-12-12 16:11:15.268883] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:49.153 16:11:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.153 16:11:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:49.153 16:11:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:49.153 16:11:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:49.153 16:11:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:49.153 16:11:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:49.153 16:11:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:49.153 16:11:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:49.153 16:11:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:49.153 16:11:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:49.153 16:11:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:49.153 16:11:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.153 16:11:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:49.153 16:11:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.153 16:11:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:49.153 16:11:15 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.153 16:11:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:49.153 "name": "raid_bdev1", 00:14:49.153 "uuid": "8eb7ec9a-770d-4691-9352-d3031aac9de2", 00:14:49.153 "strip_size_kb": 0, 00:14:49.153 "state": "online", 00:14:49.153 "raid_level": "raid1", 00:14:49.153 "superblock": true, 00:14:49.153 "num_base_bdevs": 4, 00:14:49.153 "num_base_bdevs_discovered": 4, 00:14:49.153 "num_base_bdevs_operational": 4, 00:14:49.153 "base_bdevs_list": [ 00:14:49.153 { 00:14:49.153 "name": "BaseBdev1", 00:14:49.153 "uuid": "c2c144b5-47f6-5018-ab98-e749605c90e8", 00:14:49.153 "is_configured": true, 00:14:49.153 "data_offset": 2048, 00:14:49.153 "data_size": 63488 00:14:49.153 }, 00:14:49.153 { 00:14:49.153 "name": "BaseBdev2", 00:14:49.153 "uuid": "d6c37c29-5bc1-55c2-8b39-28646fdabc78", 00:14:49.153 "is_configured": true, 00:14:49.153 "data_offset": 2048, 00:14:49.153 "data_size": 63488 00:14:49.153 }, 00:14:49.153 { 00:14:49.153 "name": "BaseBdev3", 00:14:49.153 "uuid": "955cc048-f22c-554c-9200-0bd711268585", 00:14:49.153 "is_configured": true, 00:14:49.153 "data_offset": 2048, 00:14:49.153 "data_size": 63488 00:14:49.153 }, 00:14:49.153 { 00:14:49.153 "name": "BaseBdev4", 00:14:49.153 "uuid": "01f3b9a9-821b-56eb-93d3-8b6d2b1aa2aa", 00:14:49.153 "is_configured": true, 00:14:49.153 "data_offset": 2048, 00:14:49.153 "data_size": 63488 00:14:49.153 } 00:14:49.153 ] 00:14:49.153 }' 00:14:49.153 16:11:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:49.153 16:11:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:49.720 16:11:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:49.720 16:11:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:49.720 16:11:15 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.720 16:11:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:49.720 [2024-12-12 16:11:15.769605] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:49.720 16:11:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.720 16:11:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:14:49.720 16:11:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.720 16:11:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.720 16:11:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:49.720 16:11:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:49.720 16:11:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.720 16:11:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:14:49.720 16:11:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:14:49.720 16:11:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:49.720 16:11:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:49.720 16:11:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.720 16:11:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:49.720 [2024-12-12 16:11:15.853098] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:49.720 16:11:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.720 16:11:15 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:49.720 16:11:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:49.720 16:11:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:49.720 16:11:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:49.720 16:11:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:49.720 16:11:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:49.720 16:11:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:49.720 16:11:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:49.720 16:11:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:49.720 16:11:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:49.720 16:11:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:49.720 16:11:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.720 16:11:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.720 16:11:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:49.720 16:11:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.720 16:11:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:49.720 "name": "raid_bdev1", 00:14:49.720 "uuid": "8eb7ec9a-770d-4691-9352-d3031aac9de2", 00:14:49.720 "strip_size_kb": 0, 00:14:49.720 "state": "online", 00:14:49.720 "raid_level": "raid1", 00:14:49.720 
"superblock": true, 00:14:49.720 "num_base_bdevs": 4, 00:14:49.720 "num_base_bdevs_discovered": 3, 00:14:49.720 "num_base_bdevs_operational": 3, 00:14:49.720 "base_bdevs_list": [ 00:14:49.720 { 00:14:49.720 "name": null, 00:14:49.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.720 "is_configured": false, 00:14:49.720 "data_offset": 0, 00:14:49.720 "data_size": 63488 00:14:49.720 }, 00:14:49.720 { 00:14:49.720 "name": "BaseBdev2", 00:14:49.720 "uuid": "d6c37c29-5bc1-55c2-8b39-28646fdabc78", 00:14:49.720 "is_configured": true, 00:14:49.720 "data_offset": 2048, 00:14:49.720 "data_size": 63488 00:14:49.720 }, 00:14:49.720 { 00:14:49.720 "name": "BaseBdev3", 00:14:49.720 "uuid": "955cc048-f22c-554c-9200-0bd711268585", 00:14:49.720 "is_configured": true, 00:14:49.720 "data_offset": 2048, 00:14:49.720 "data_size": 63488 00:14:49.720 }, 00:14:49.720 { 00:14:49.720 "name": "BaseBdev4", 00:14:49.720 "uuid": "01f3b9a9-821b-56eb-93d3-8b6d2b1aa2aa", 00:14:49.720 "is_configured": true, 00:14:49.720 "data_offset": 2048, 00:14:49.720 "data_size": 63488 00:14:49.720 } 00:14:49.720 ] 00:14:49.720 }' 00:14:49.720 16:11:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:49.720 16:11:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:49.720 [2024-12-12 16:11:15.952390] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:49.720 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:49.720 Zero copy mechanism will not be used. 00:14:49.720 Running I/O for 60 seconds... 
00:14:49.978 16:11:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:49.978 16:11:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.978 16:11:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:49.978 [2024-12-12 16:11:16.252493] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:49.979 16:11:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.979 16:11:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:49.979 [2024-12-12 16:11:16.315240] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:14:49.979 [2024-12-12 16:11:16.317301] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:50.238 [2024-12-12 16:11:16.434567] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:50.238 [2024-12-12 16:11:16.435127] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:50.497 [2024-12-12 16:11:16.660676] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:50.497 [2024-12-12 16:11:16.661062] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:50.755 [2024-12-12 16:11:16.916844] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:50.755 [2024-12-12 16:11:16.918399] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:51.013 168.00 IOPS, 504.00 MiB/s [2024-12-12T16:11:17.366Z] [2024-12-12 16:11:17.129896] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:51.014 [2024-12-12 16:11:17.130271] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:51.014 16:11:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:51.014 16:11:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:51.014 16:11:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:51.014 16:11:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:51.014 16:11:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:51.014 16:11:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.014 16:11:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:51.014 16:11:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.014 16:11:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:51.014 16:11:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.014 16:11:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:51.014 "name": "raid_bdev1", 00:14:51.014 "uuid": "8eb7ec9a-770d-4691-9352-d3031aac9de2", 00:14:51.014 "strip_size_kb": 0, 00:14:51.014 "state": "online", 00:14:51.014 "raid_level": "raid1", 00:14:51.014 "superblock": true, 00:14:51.014 "num_base_bdevs": 4, 00:14:51.014 "num_base_bdevs_discovered": 4, 00:14:51.014 "num_base_bdevs_operational": 4, 00:14:51.014 "process": { 00:14:51.014 "type": "rebuild", 00:14:51.014 "target": "spare", 00:14:51.014 "progress": { 
00:14:51.014 "blocks": 10240, 00:14:51.014 "percent": 16 00:14:51.014 } 00:14:51.014 }, 00:14:51.014 "base_bdevs_list": [ 00:14:51.014 { 00:14:51.014 "name": "spare", 00:14:51.014 "uuid": "ffd185e9-2ca0-5423-917a-ea06b76625d4", 00:14:51.014 "is_configured": true, 00:14:51.014 "data_offset": 2048, 00:14:51.014 "data_size": 63488 00:14:51.014 }, 00:14:51.014 { 00:14:51.014 "name": "BaseBdev2", 00:14:51.014 "uuid": "d6c37c29-5bc1-55c2-8b39-28646fdabc78", 00:14:51.014 "is_configured": true, 00:14:51.014 "data_offset": 2048, 00:14:51.014 "data_size": 63488 00:14:51.014 }, 00:14:51.014 { 00:14:51.014 "name": "BaseBdev3", 00:14:51.014 "uuid": "955cc048-f22c-554c-9200-0bd711268585", 00:14:51.014 "is_configured": true, 00:14:51.014 "data_offset": 2048, 00:14:51.014 "data_size": 63488 00:14:51.014 }, 00:14:51.014 { 00:14:51.014 "name": "BaseBdev4", 00:14:51.014 "uuid": "01f3b9a9-821b-56eb-93d3-8b6d2b1aa2aa", 00:14:51.014 "is_configured": true, 00:14:51.014 "data_offset": 2048, 00:14:51.014 "data_size": 63488 00:14:51.014 } 00:14:51.014 ] 00:14:51.014 }' 00:14:51.014 16:11:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:51.273 16:11:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:51.273 16:11:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:51.273 16:11:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:51.273 16:11:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:51.273 16:11:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.273 16:11:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:51.273 [2024-12-12 16:11:17.438111] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:51.273 [2024-12-12 
16:11:17.490335] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:51.273 [2024-12-12 16:11:17.595752] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:51.273 [2024-12-12 16:11:17.607085] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:51.273 [2024-12-12 16:11:17.607139] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:51.273 [2024-12-12 16:11:17.607156] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:51.532 [2024-12-12 16:11:17.638004] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:14:51.532 16:11:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.532 16:11:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:51.532 16:11:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:51.532 16:11:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:51.532 16:11:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:51.532 16:11:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:51.532 16:11:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:51.532 16:11:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:51.532 16:11:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:51.532 16:11:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:51.532 16:11:17 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@111 -- # local tmp 00:14:51.532 16:11:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.532 16:11:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.532 16:11:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:51.532 16:11:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:51.532 16:11:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.532 16:11:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:51.532 "name": "raid_bdev1", 00:14:51.532 "uuid": "8eb7ec9a-770d-4691-9352-d3031aac9de2", 00:14:51.532 "strip_size_kb": 0, 00:14:51.532 "state": "online", 00:14:51.532 "raid_level": "raid1", 00:14:51.532 "superblock": true, 00:14:51.532 "num_base_bdevs": 4, 00:14:51.532 "num_base_bdevs_discovered": 3, 00:14:51.532 "num_base_bdevs_operational": 3, 00:14:51.532 "base_bdevs_list": [ 00:14:51.532 { 00:14:51.532 "name": null, 00:14:51.532 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.532 "is_configured": false, 00:14:51.532 "data_offset": 0, 00:14:51.532 "data_size": 63488 00:14:51.532 }, 00:14:51.532 { 00:14:51.532 "name": "BaseBdev2", 00:14:51.532 "uuid": "d6c37c29-5bc1-55c2-8b39-28646fdabc78", 00:14:51.532 "is_configured": true, 00:14:51.532 "data_offset": 2048, 00:14:51.532 "data_size": 63488 00:14:51.532 }, 00:14:51.532 { 00:14:51.532 "name": "BaseBdev3", 00:14:51.532 "uuid": "955cc048-f22c-554c-9200-0bd711268585", 00:14:51.532 "is_configured": true, 00:14:51.532 "data_offset": 2048, 00:14:51.532 "data_size": 63488 00:14:51.532 }, 00:14:51.532 { 00:14:51.532 "name": "BaseBdev4", 00:14:51.532 "uuid": "01f3b9a9-821b-56eb-93d3-8b6d2b1aa2aa", 00:14:51.532 "is_configured": true, 00:14:51.532 "data_offset": 2048, 00:14:51.532 "data_size": 63488 00:14:51.532 } 
00:14:51.532 ] 00:14:51.532 }' 00:14:51.532 16:11:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:51.532 16:11:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:51.792 140.50 IOPS, 421.50 MiB/s [2024-12-12T16:11:18.144Z] 16:11:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:51.792 16:11:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:51.792 16:11:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:51.792 16:11:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:51.792 16:11:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:51.792 16:11:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:51.792 16:11:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.792 16:11:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.792 16:11:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:52.051 16:11:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.051 16:11:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:52.051 "name": "raid_bdev1", 00:14:52.051 "uuid": "8eb7ec9a-770d-4691-9352-d3031aac9de2", 00:14:52.051 "strip_size_kb": 0, 00:14:52.051 "state": "online", 00:14:52.051 "raid_level": "raid1", 00:14:52.051 "superblock": true, 00:14:52.051 "num_base_bdevs": 4, 00:14:52.051 "num_base_bdevs_discovered": 3, 00:14:52.051 "num_base_bdevs_operational": 3, 00:14:52.051 "base_bdevs_list": [ 00:14:52.051 { 00:14:52.051 "name": null, 00:14:52.051 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:52.051 "is_configured": false, 00:14:52.051 "data_offset": 0, 00:14:52.051 "data_size": 63488 00:14:52.051 }, 00:14:52.051 { 00:14:52.051 "name": "BaseBdev2", 00:14:52.051 "uuid": "d6c37c29-5bc1-55c2-8b39-28646fdabc78", 00:14:52.051 "is_configured": true, 00:14:52.051 "data_offset": 2048, 00:14:52.051 "data_size": 63488 00:14:52.051 }, 00:14:52.051 { 00:14:52.051 "name": "BaseBdev3", 00:14:52.051 "uuid": "955cc048-f22c-554c-9200-0bd711268585", 00:14:52.051 "is_configured": true, 00:14:52.051 "data_offset": 2048, 00:14:52.051 "data_size": 63488 00:14:52.051 }, 00:14:52.051 { 00:14:52.051 "name": "BaseBdev4", 00:14:52.051 "uuid": "01f3b9a9-821b-56eb-93d3-8b6d2b1aa2aa", 00:14:52.051 "is_configured": true, 00:14:52.051 "data_offset": 2048, 00:14:52.051 "data_size": 63488 00:14:52.051 } 00:14:52.051 ] 00:14:52.051 }' 00:14:52.051 16:11:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:52.051 16:11:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:52.051 16:11:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:52.051 16:11:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:52.051 16:11:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:52.051 16:11:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.051 16:11:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:52.051 [2024-12-12 16:11:18.256214] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:52.051 16:11:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.051 16:11:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 
00:14:52.051 [2024-12-12 16:11:18.330756] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:14:52.051 [2024-12-12 16:11:18.332811] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:52.310 [2024-12-12 16:11:18.442211] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:52.310 [2024-12-12 16:11:18.443805] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:52.568 [2024-12-12 16:11:18.667500] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:52.568 [2024-12-12 16:11:18.667875] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:52.827 134.00 IOPS, 402.00 MiB/s [2024-12-12T16:11:19.179Z] [2024-12-12 16:11:19.017398] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:53.086 [2024-12-12 16:11:19.257524] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:53.086 [2024-12-12 16:11:19.258368] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:53.086 16:11:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:53.086 16:11:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:53.086 16:11:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:53.086 16:11:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:53.086 16:11:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # 
local raid_bdev_info 00:14:53.086 16:11:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.086 16:11:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.086 16:11:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:53.086 16:11:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:53.086 16:11:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.086 16:11:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:53.086 "name": "raid_bdev1", 00:14:53.086 "uuid": "8eb7ec9a-770d-4691-9352-d3031aac9de2", 00:14:53.086 "strip_size_kb": 0, 00:14:53.086 "state": "online", 00:14:53.086 "raid_level": "raid1", 00:14:53.086 "superblock": true, 00:14:53.086 "num_base_bdevs": 4, 00:14:53.086 "num_base_bdevs_discovered": 4, 00:14:53.086 "num_base_bdevs_operational": 4, 00:14:53.086 "process": { 00:14:53.086 "type": "rebuild", 00:14:53.086 "target": "spare", 00:14:53.086 "progress": { 00:14:53.086 "blocks": 10240, 00:14:53.086 "percent": 16 00:14:53.086 } 00:14:53.086 }, 00:14:53.086 "base_bdevs_list": [ 00:14:53.086 { 00:14:53.086 "name": "spare", 00:14:53.086 "uuid": "ffd185e9-2ca0-5423-917a-ea06b76625d4", 00:14:53.086 "is_configured": true, 00:14:53.086 "data_offset": 2048, 00:14:53.086 "data_size": 63488 00:14:53.086 }, 00:14:53.086 { 00:14:53.086 "name": "BaseBdev2", 00:14:53.086 "uuid": "d6c37c29-5bc1-55c2-8b39-28646fdabc78", 00:14:53.086 "is_configured": true, 00:14:53.086 "data_offset": 2048, 00:14:53.086 "data_size": 63488 00:14:53.086 }, 00:14:53.086 { 00:14:53.086 "name": "BaseBdev3", 00:14:53.086 "uuid": "955cc048-f22c-554c-9200-0bd711268585", 00:14:53.086 "is_configured": true, 00:14:53.086 "data_offset": 2048, 00:14:53.086 "data_size": 63488 00:14:53.086 }, 00:14:53.086 { 00:14:53.086 "name": 
"BaseBdev4", 00:14:53.086 "uuid": "01f3b9a9-821b-56eb-93d3-8b6d2b1aa2aa", 00:14:53.086 "is_configured": true, 00:14:53.086 "data_offset": 2048, 00:14:53.086 "data_size": 63488 00:14:53.086 } 00:14:53.086 ] 00:14:53.086 }' 00:14:53.086 16:11:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:53.086 16:11:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:53.086 16:11:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:53.345 16:11:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:53.345 16:11:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:14:53.345 16:11:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:14:53.345 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:14:53.345 16:11:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:53.345 16:11:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:53.345 16:11:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:14:53.345 16:11:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:53.345 16:11:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.345 16:11:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:53.345 [2024-12-12 16:11:19.462963] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:53.345 [2024-12-12 16:11:19.682455] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:14:53.345 [2024-12-12 16:11:19.682513] 
bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:14:53.345 16:11:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.345 16:11:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:14:53.345 16:11:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:14:53.345 16:11:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:53.345 16:11:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:53.345 16:11:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:53.345 16:11:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:53.345 16:11:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:53.345 16:11:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.345 16:11:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:53.345 16:11:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.345 16:11:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:53.605 16:11:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.605 16:11:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:53.605 "name": "raid_bdev1", 00:14:53.605 "uuid": "8eb7ec9a-770d-4691-9352-d3031aac9de2", 00:14:53.605 "strip_size_kb": 0, 00:14:53.605 "state": "online", 00:14:53.605 "raid_level": "raid1", 00:14:53.605 "superblock": true, 00:14:53.605 "num_base_bdevs": 4, 00:14:53.605 "num_base_bdevs_discovered": 3, 00:14:53.605 
"num_base_bdevs_operational": 3, 00:14:53.605 "process": { 00:14:53.605 "type": "rebuild", 00:14:53.605 "target": "spare", 00:14:53.605 "progress": { 00:14:53.605 "blocks": 12288, 00:14:53.605 "percent": 19 00:14:53.605 } 00:14:53.605 }, 00:14:53.605 "base_bdevs_list": [ 00:14:53.605 { 00:14:53.605 "name": "spare", 00:14:53.605 "uuid": "ffd185e9-2ca0-5423-917a-ea06b76625d4", 00:14:53.605 "is_configured": true, 00:14:53.605 "data_offset": 2048, 00:14:53.605 "data_size": 63488 00:14:53.605 }, 00:14:53.605 { 00:14:53.605 "name": null, 00:14:53.605 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.605 "is_configured": false, 00:14:53.605 "data_offset": 0, 00:14:53.605 "data_size": 63488 00:14:53.605 }, 00:14:53.605 { 00:14:53.605 "name": "BaseBdev3", 00:14:53.605 "uuid": "955cc048-f22c-554c-9200-0bd711268585", 00:14:53.605 "is_configured": true, 00:14:53.605 "data_offset": 2048, 00:14:53.605 "data_size": 63488 00:14:53.605 }, 00:14:53.605 { 00:14:53.605 "name": "BaseBdev4", 00:14:53.605 "uuid": "01f3b9a9-821b-56eb-93d3-8b6d2b1aa2aa", 00:14:53.605 "is_configured": true, 00:14:53.605 "data_offset": 2048, 00:14:53.605 "data_size": 63488 00:14:53.605 } 00:14:53.605 ] 00:14:53.605 }' 00:14:53.605 16:11:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:53.605 16:11:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:53.605 16:11:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:53.605 16:11:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:53.605 16:11:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=507 00:14:53.606 16:11:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:53.606 16:11:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:14:53.606 16:11:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:53.606 16:11:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:53.606 16:11:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:53.606 16:11:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:53.606 16:11:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:53.606 16:11:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.606 16:11:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.606 16:11:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:53.606 [2024-12-12 16:11:19.827847] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:53.606 16:11:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.606 16:11:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:53.606 "name": "raid_bdev1", 00:14:53.606 "uuid": "8eb7ec9a-770d-4691-9352-d3031aac9de2", 00:14:53.606 "strip_size_kb": 0, 00:14:53.606 "state": "online", 00:14:53.606 "raid_level": "raid1", 00:14:53.606 "superblock": true, 00:14:53.606 "num_base_bdevs": 4, 00:14:53.606 "num_base_bdevs_discovered": 3, 00:14:53.606 "num_base_bdevs_operational": 3, 00:14:53.606 "process": { 00:14:53.606 "type": "rebuild", 00:14:53.606 "target": "spare", 00:14:53.606 "progress": { 00:14:53.606 "blocks": 12288, 00:14:53.606 "percent": 19 00:14:53.606 } 00:14:53.606 }, 00:14:53.606 "base_bdevs_list": [ 00:14:53.606 { 00:14:53.606 "name": "spare", 00:14:53.606 "uuid": "ffd185e9-2ca0-5423-917a-ea06b76625d4", 
00:14:53.606 "is_configured": true, 00:14:53.606 "data_offset": 2048, 00:14:53.606 "data_size": 63488 00:14:53.606 }, 00:14:53.606 { 00:14:53.606 "name": null, 00:14:53.606 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.606 "is_configured": false, 00:14:53.606 "data_offset": 0, 00:14:53.606 "data_size": 63488 00:14:53.606 }, 00:14:53.606 { 00:14:53.606 "name": "BaseBdev3", 00:14:53.606 "uuid": "955cc048-f22c-554c-9200-0bd711268585", 00:14:53.606 "is_configured": true, 00:14:53.606 "data_offset": 2048, 00:14:53.606 "data_size": 63488 00:14:53.606 }, 00:14:53.606 { 00:14:53.606 "name": "BaseBdev4", 00:14:53.606 "uuid": "01f3b9a9-821b-56eb-93d3-8b6d2b1aa2aa", 00:14:53.606 "is_configured": true, 00:14:53.606 "data_offset": 2048, 00:14:53.606 "data_size": 63488 00:14:53.606 } 00:14:53.606 ] 00:14:53.606 }' 00:14:53.606 16:11:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:53.606 16:11:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:53.606 16:11:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:53.606 16:11:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:53.606 16:11:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:54.175 116.75 IOPS, 350.25 MiB/s [2024-12-12T16:11:20.527Z] [2024-12-12 16:11:20.274974] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:14:54.175 [2024-12-12 16:11:20.275886] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:14:54.435 [2024-12-12 16:11:20.703166] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:14:54.695 [2024-12-12 16:11:20.817068] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:14:54.695 16:11:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:54.695 16:11:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:54.695 16:11:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:54.695 16:11:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:54.695 16:11:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:54.695 16:11:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:54.695 16:11:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.695 16:11:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:54.695 16:11:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.695 16:11:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:54.695 107.00 IOPS, 321.00 MiB/s [2024-12-12T16:11:21.047Z] 16:11:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.695 16:11:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:54.695 "name": "raid_bdev1", 00:14:54.695 "uuid": "8eb7ec9a-770d-4691-9352-d3031aac9de2", 00:14:54.695 "strip_size_kb": 0, 00:14:54.695 "state": "online", 00:14:54.695 "raid_level": "raid1", 00:14:54.695 "superblock": true, 00:14:54.695 "num_base_bdevs": 4, 00:14:54.695 "num_base_bdevs_discovered": 3, 00:14:54.695 "num_base_bdevs_operational": 3, 00:14:54.695 "process": { 00:14:54.695 "type": "rebuild", 00:14:54.695 "target": "spare", 00:14:54.695 "progress": { 
00:14:54.695 "blocks": 28672, 00:14:54.695 "percent": 45 00:14:54.695 } 00:14:54.695 }, 00:14:54.695 "base_bdevs_list": [ 00:14:54.695 { 00:14:54.695 "name": "spare", 00:14:54.695 "uuid": "ffd185e9-2ca0-5423-917a-ea06b76625d4", 00:14:54.695 "is_configured": true, 00:14:54.695 "data_offset": 2048, 00:14:54.695 "data_size": 63488 00:14:54.695 }, 00:14:54.695 { 00:14:54.695 "name": null, 00:14:54.695 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.695 "is_configured": false, 00:14:54.695 "data_offset": 0, 00:14:54.695 "data_size": 63488 00:14:54.695 }, 00:14:54.695 { 00:14:54.695 "name": "BaseBdev3", 00:14:54.695 "uuid": "955cc048-f22c-554c-9200-0bd711268585", 00:14:54.695 "is_configured": true, 00:14:54.695 "data_offset": 2048, 00:14:54.695 "data_size": 63488 00:14:54.695 }, 00:14:54.695 { 00:14:54.695 "name": "BaseBdev4", 00:14:54.695 "uuid": "01f3b9a9-821b-56eb-93d3-8b6d2b1aa2aa", 00:14:54.695 "is_configured": true, 00:14:54.695 "data_offset": 2048, 00:14:54.695 "data_size": 63488 00:14:54.695 } 00:14:54.695 ] 00:14:54.695 }' 00:14:54.695 16:11:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:54.695 16:11:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:54.695 16:11:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:54.955 16:11:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:54.955 16:11:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:54.955 [2024-12-12 16:11:21.136832] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:14:55.214 [2024-12-12 16:11:21.484398] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:14:55.474 [2024-12-12 16:11:21.702316] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:14:55.734 [2024-12-12 16:11:21.925169] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:14:55.734 95.17 IOPS, 285.50 MiB/s [2024-12-12T16:11:22.086Z] 16:11:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:55.734 16:11:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:55.734 16:11:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:55.734 16:11:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:55.734 16:11:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:55.734 16:11:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:55.994 16:11:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.994 16:11:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:55.994 16:11:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.994 16:11:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:55.994 16:11:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.994 16:11:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:55.994 "name": "raid_bdev1", 00:14:55.994 "uuid": "8eb7ec9a-770d-4691-9352-d3031aac9de2", 00:14:55.994 "strip_size_kb": 0, 00:14:55.994 "state": "online", 00:14:55.994 "raid_level": "raid1", 00:14:55.994 "superblock": true, 00:14:55.994 "num_base_bdevs": 4, 00:14:55.994 "num_base_bdevs_discovered": 3, 
00:14:55.994 "num_base_bdevs_operational": 3, 00:14:55.994 "process": { 00:14:55.994 "type": "rebuild", 00:14:55.994 "target": "spare", 00:14:55.994 "progress": { 00:14:55.994 "blocks": 45056, 00:14:55.994 "percent": 70 00:14:55.994 } 00:14:55.994 }, 00:14:55.994 "base_bdevs_list": [ 00:14:55.994 { 00:14:55.994 "name": "spare", 00:14:55.994 "uuid": "ffd185e9-2ca0-5423-917a-ea06b76625d4", 00:14:55.994 "is_configured": true, 00:14:55.994 "data_offset": 2048, 00:14:55.994 "data_size": 63488 00:14:55.994 }, 00:14:55.994 { 00:14:55.994 "name": null, 00:14:55.994 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.994 "is_configured": false, 00:14:55.994 "data_offset": 0, 00:14:55.994 "data_size": 63488 00:14:55.994 }, 00:14:55.994 { 00:14:55.994 "name": "BaseBdev3", 00:14:55.994 "uuid": "955cc048-f22c-554c-9200-0bd711268585", 00:14:55.994 "is_configured": true, 00:14:55.994 "data_offset": 2048, 00:14:55.994 "data_size": 63488 00:14:55.994 }, 00:14:55.994 { 00:14:55.994 "name": "BaseBdev4", 00:14:55.994 "uuid": "01f3b9a9-821b-56eb-93d3-8b6d2b1aa2aa", 00:14:55.994 "is_configured": true, 00:14:55.994 "data_offset": 2048, 00:14:55.994 "data_size": 63488 00:14:55.994 } 00:14:55.994 ] 00:14:55.994 }' 00:14:55.994 16:11:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:55.994 [2024-12-12 16:11:22.139378] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:14:55.994 16:11:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:55.994 16:11:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:55.994 16:11:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:55.994 16:11:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:56.934 85.57 IOPS, 256.71 MiB/s 
[2024-12-12T16:11:23.286Z] [2024-12-12 16:11:23.126799] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:56.934 [2024-12-12 16:11:23.226654] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:56.934 [2024-12-12 16:11:23.229810] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:56.934 16:11:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:56.934 16:11:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:56.934 16:11:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:56.934 16:11:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:56.934 16:11:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:56.934 16:11:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:56.934 16:11:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.934 16:11:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:56.934 16:11:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.934 16:11:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:56.934 16:11:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.934 16:11:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:56.934 "name": "raid_bdev1", 00:14:56.934 "uuid": "8eb7ec9a-770d-4691-9352-d3031aac9de2", 00:14:56.934 "strip_size_kb": 0, 00:14:56.934 "state": "online", 00:14:56.934 "raid_level": "raid1", 00:14:56.934 "superblock": true, 00:14:56.934 
"num_base_bdevs": 4, 00:14:56.934 "num_base_bdevs_discovered": 3, 00:14:56.934 "num_base_bdevs_operational": 3, 00:14:56.934 "base_bdevs_list": [ 00:14:56.934 { 00:14:56.934 "name": "spare", 00:14:56.934 "uuid": "ffd185e9-2ca0-5423-917a-ea06b76625d4", 00:14:56.934 "is_configured": true, 00:14:56.934 "data_offset": 2048, 00:14:56.934 "data_size": 63488 00:14:56.934 }, 00:14:56.934 { 00:14:56.934 "name": null, 00:14:56.934 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.934 "is_configured": false, 00:14:56.934 "data_offset": 0, 00:14:56.934 "data_size": 63488 00:14:56.934 }, 00:14:56.934 { 00:14:56.934 "name": "BaseBdev3", 00:14:56.934 "uuid": "955cc048-f22c-554c-9200-0bd711268585", 00:14:56.934 "is_configured": true, 00:14:56.934 "data_offset": 2048, 00:14:56.934 "data_size": 63488 00:14:56.934 }, 00:14:56.934 { 00:14:56.934 "name": "BaseBdev4", 00:14:56.934 "uuid": "01f3b9a9-821b-56eb-93d3-8b6d2b1aa2aa", 00:14:56.934 "is_configured": true, 00:14:56.934 "data_offset": 2048, 00:14:56.934 "data_size": 63488 00:14:56.934 } 00:14:56.934 ] 00:14:56.934 }' 00:14:57.197 16:11:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:57.197 16:11:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:57.197 16:11:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:57.197 16:11:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:57.197 16:11:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:14:57.197 16:11:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:57.197 16:11:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:57.197 16:11:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 
00:14:57.197 16:11:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:57.197 16:11:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:57.197 16:11:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:57.197 16:11:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.197 16:11:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.197 16:11:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:57.197 16:11:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.197 16:11:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:57.197 "name": "raid_bdev1", 00:14:57.197 "uuid": "8eb7ec9a-770d-4691-9352-d3031aac9de2", 00:14:57.197 "strip_size_kb": 0, 00:14:57.197 "state": "online", 00:14:57.197 "raid_level": "raid1", 00:14:57.197 "superblock": true, 00:14:57.197 "num_base_bdevs": 4, 00:14:57.197 "num_base_bdevs_discovered": 3, 00:14:57.197 "num_base_bdevs_operational": 3, 00:14:57.197 "base_bdevs_list": [ 00:14:57.197 { 00:14:57.197 "name": "spare", 00:14:57.197 "uuid": "ffd185e9-2ca0-5423-917a-ea06b76625d4", 00:14:57.197 "is_configured": true, 00:14:57.197 "data_offset": 2048, 00:14:57.197 "data_size": 63488 00:14:57.197 }, 00:14:57.197 { 00:14:57.197 "name": null, 00:14:57.197 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:57.197 "is_configured": false, 00:14:57.197 "data_offset": 0, 00:14:57.197 "data_size": 63488 00:14:57.197 }, 00:14:57.197 { 00:14:57.197 "name": "BaseBdev3", 00:14:57.197 "uuid": "955cc048-f22c-554c-9200-0bd711268585", 00:14:57.197 "is_configured": true, 00:14:57.197 "data_offset": 2048, 00:14:57.197 "data_size": 63488 00:14:57.197 }, 00:14:57.197 { 00:14:57.197 "name": "BaseBdev4", 
00:14:57.197 "uuid": "01f3b9a9-821b-56eb-93d3-8b6d2b1aa2aa", 00:14:57.197 "is_configured": true, 00:14:57.197 "data_offset": 2048, 00:14:57.197 "data_size": 63488 00:14:57.197 } 00:14:57.197 ] 00:14:57.197 }' 00:14:57.197 16:11:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:57.197 16:11:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:57.197 16:11:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:57.197 16:11:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:57.197 16:11:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:57.197 16:11:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:57.197 16:11:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:57.197 16:11:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:57.197 16:11:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:57.197 16:11:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:57.197 16:11:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:57.197 16:11:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:57.197 16:11:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:57.197 16:11:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:57.197 16:11:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:57.197 16:11:23 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.197 16:11:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.197 16:11:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:57.197 16:11:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.197 16:11:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:57.197 "name": "raid_bdev1", 00:14:57.197 "uuid": "8eb7ec9a-770d-4691-9352-d3031aac9de2", 00:14:57.197 "strip_size_kb": 0, 00:14:57.197 "state": "online", 00:14:57.197 "raid_level": "raid1", 00:14:57.197 "superblock": true, 00:14:57.197 "num_base_bdevs": 4, 00:14:57.197 "num_base_bdevs_discovered": 3, 00:14:57.197 "num_base_bdevs_operational": 3, 00:14:57.197 "base_bdevs_list": [ 00:14:57.197 { 00:14:57.197 "name": "spare", 00:14:57.197 "uuid": "ffd185e9-2ca0-5423-917a-ea06b76625d4", 00:14:57.197 "is_configured": true, 00:14:57.197 "data_offset": 2048, 00:14:57.197 "data_size": 63488 00:14:57.197 }, 00:14:57.197 { 00:14:57.197 "name": null, 00:14:57.197 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:57.197 "is_configured": false, 00:14:57.197 "data_offset": 0, 00:14:57.197 "data_size": 63488 00:14:57.197 }, 00:14:57.197 { 00:14:57.197 "name": "BaseBdev3", 00:14:57.197 "uuid": "955cc048-f22c-554c-9200-0bd711268585", 00:14:57.197 "is_configured": true, 00:14:57.197 "data_offset": 2048, 00:14:57.197 "data_size": 63488 00:14:57.197 }, 00:14:57.197 { 00:14:57.197 "name": "BaseBdev4", 00:14:57.197 "uuid": "01f3b9a9-821b-56eb-93d3-8b6d2b1aa2aa", 00:14:57.197 "is_configured": true, 00:14:57.197 "data_offset": 2048, 00:14:57.197 "data_size": 63488 00:14:57.197 } 00:14:57.197 ] 00:14:57.197 }' 00:14:57.197 16:11:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:57.197 16:11:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:14:57.770 16:11:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:57.770 16:11:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.770 16:11:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:57.770 [2024-12-12 16:11:23.936106] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:57.770 [2024-12-12 16:11:23.936143] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:57.770 80.62 IOPS, 241.88 MiB/s 00:14:57.770 Latency(us) 00:14:57.770 [2024-12-12T16:11:24.122Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:57.770 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:14:57.770 raid_bdev1 : 8.07 80.18 240.55 0.00 0.00 16735.84 323.74 118136.51 00:14:57.770 [2024-12-12T16:11:24.122Z] =================================================================================================================== 00:14:57.770 [2024-12-12T16:11:24.122Z] Total : 80.18 240.55 0.00 0.00 16735.84 323.74 118136.51 00:14:57.770 [2024-12-12 16:11:24.028625] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:57.770 [2024-12-12 16:11:24.028701] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:57.770 [2024-12-12 16:11:24.028798] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:57.770 [2024-12-12 16:11:24.028810] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:57.770 { 00:14:57.770 "results": [ 00:14:57.770 { 00:14:57.770 "job": "raid_bdev1", 00:14:57.770 "core_mask": "0x1", 00:14:57.770 "workload": "randrw", 00:14:57.770 "percentage": 50, 00:14:57.770 "status": "finished", 00:14:57.770 "queue_depth": 2, 00:14:57.770 
"io_size": 3145728, 00:14:57.770 "runtime": 8.06903, 00:14:57.770 "iops": 80.18311990412727, 00:14:57.770 "mibps": 240.5493597123818, 00:14:57.770 "io_failed": 0, 00:14:57.770 "io_timeout": 0, 00:14:57.770 "avg_latency_us": 16735.842206218826, 00:14:57.770 "min_latency_us": 323.74497816593885, 00:14:57.770 "max_latency_us": 118136.51004366812 00:14:57.770 } 00:14:57.770 ], 00:14:57.770 "core_count": 1 00:14:57.770 } 00:14:57.770 16:11:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.770 16:11:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.770 16:11:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.770 16:11:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:57.770 16:11:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:14:57.770 16:11:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.770 16:11:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:57.770 16:11:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:57.770 16:11:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:14:57.770 16:11:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:14:57.770 16:11:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:57.770 16:11:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:14:57.770 16:11:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:57.770 16:11:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:57.770 16:11:24 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:57.770 16:11:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:57.770 16:11:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:57.770 16:11:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:57.770 16:11:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:14:58.030 /dev/nbd0 00:14:58.030 16:11:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:58.030 16:11:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:58.030 16:11:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:58.030 16:11:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:14:58.030 16:11:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:58.030 16:11:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:58.030 16:11:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:58.030 16:11:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:14:58.030 16:11:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:58.030 16:11:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:58.030 16:11:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:58.030 1+0 records in 00:14:58.030 1+0 records out 00:14:58.030 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000389355 s, 10.5 MB/s 00:14:58.030 16:11:24 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:58.030 16:11:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:14:58.030 16:11:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:58.030 16:11:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:58.030 16:11:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:14:58.030 16:11:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:58.031 16:11:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:58.031 16:11:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:58.031 16:11:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:14:58.031 16:11:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:14:58.031 16:11:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:58.031 16:11:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:14:58.031 16:11:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:14:58.031 16:11:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:58.031 16:11:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:14:58.031 16:11:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:58.031 16:11:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:58.031 16:11:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:58.031 
16:11:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:58.031 16:11:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:58.031 16:11:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:58.031 16:11:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:14:58.291 /dev/nbd1 00:14:58.291 16:11:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:58.291 16:11:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:58.291 16:11:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:58.291 16:11:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:14:58.291 16:11:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:58.291 16:11:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:58.291 16:11:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:58.291 16:11:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:14:58.291 16:11:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:58.291 16:11:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:58.291 16:11:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:58.291 1+0 records in 00:14:58.291 1+0 records out 00:14:58.291 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000468376 s, 8.7 MB/s 00:14:58.291 16:11:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:58.291 16:11:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:14:58.291 16:11:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:58.291 16:11:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:58.291 16:11:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:14:58.291 16:11:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:58.291 16:11:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:58.291 16:11:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:58.551 16:11:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:58.551 16:11:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:58.551 16:11:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:58.551 16:11:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:58.551 16:11:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:58.551 16:11:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:58.551 16:11:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:58.810 16:11:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:58.810 16:11:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:58.810 16:11:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 
00:14:58.810 16:11:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:58.810 16:11:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:58.810 16:11:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:58.810 16:11:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:58.810 16:11:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:58.810 16:11:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:58.810 16:11:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:14:58.810 16:11:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:14:58.810 16:11:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:58.810 16:11:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:14:58.810 16:11:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:58.810 16:11:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:58.810 16:11:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:58.810 16:11:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:58.810 16:11:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:58.810 16:11:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:58.810 16:11:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:14:59.069 /dev/nbd1 00:14:59.069 16:11:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # 
basename /dev/nbd1 00:14:59.069 16:11:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:59.069 16:11:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:59.069 16:11:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:14:59.069 16:11:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:59.069 16:11:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:59.069 16:11:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:59.069 16:11:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:14:59.069 16:11:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:59.069 16:11:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:59.069 16:11:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:59.069 1+0 records in 00:14:59.069 1+0 records out 00:14:59.069 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000293719 s, 13.9 MB/s 00:14:59.069 16:11:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:59.069 16:11:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:14:59.069 16:11:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:59.069 16:11:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:59.069 16:11:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:14:59.069 16:11:25 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:59.069 16:11:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:59.069 16:11:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:59.069 16:11:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:59.069 16:11:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:59.069 16:11:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:59.069 16:11:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:59.069 16:11:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:59.069 16:11:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:59.069 16:11:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:59.328 16:11:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:59.328 16:11:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:59.328 16:11:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:59.328 16:11:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:59.328 16:11:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:59.328 16:11:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:59.328 16:11:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:59.328 16:11:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:59.328 16:11:25 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:59.328 16:11:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:59.328 16:11:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:59.328 16:11:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:59.328 16:11:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:59.328 16:11:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:59.328 16:11:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:59.588 16:11:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:59.588 16:11:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:59.588 16:11:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:59.588 16:11:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:59.588 16:11:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:59.588 16:11:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:59.588 16:11:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:59.588 16:11:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:59.588 16:11:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:59.588 16:11:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:59.588 16:11:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.588 16:11:25 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@10 -- # set +x 00:14:59.588 16:11:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.588 16:11:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:59.588 16:11:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.588 16:11:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:59.588 [2024-12-12 16:11:25.813188] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:59.588 [2024-12-12 16:11:25.813245] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:59.588 [2024-12-12 16:11:25.813266] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:14:59.588 [2024-12-12 16:11:25.813286] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:59.588 [2024-12-12 16:11:25.815526] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:59.588 [2024-12-12 16:11:25.815577] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:59.588 [2024-12-12 16:11:25.815688] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:59.589 [2024-12-12 16:11:25.815749] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:59.589 [2024-12-12 16:11:25.815890] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:59.589 [2024-12-12 16:11:25.816012] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:59.589 spare 00:14:59.589 16:11:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.589 16:11:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:59.589 16:11:25 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.589 16:11:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:59.589 [2024-12-12 16:11:25.915920] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:14:59.589 [2024-12-12 16:11:25.915966] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:59.589 [2024-12-12 16:11:25.916266] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160 00:14:59.589 [2024-12-12 16:11:25.916457] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:14:59.589 [2024-12-12 16:11:25.916468] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:14:59.589 [2024-12-12 16:11:25.916646] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:59.589 16:11:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.589 16:11:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:59.589 16:11:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:59.589 16:11:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:59.589 16:11:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:59.589 16:11:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:59.589 16:11:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:59.589 16:11:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:59.589 16:11:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:14:59.589 16:11:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:59.589 16:11:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:59.589 16:11:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.589 16:11:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:59.589 16:11:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.589 16:11:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:59.849 16:11:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.849 16:11:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:59.849 "name": "raid_bdev1", 00:14:59.849 "uuid": "8eb7ec9a-770d-4691-9352-d3031aac9de2", 00:14:59.849 "strip_size_kb": 0, 00:14:59.849 "state": "online", 00:14:59.849 "raid_level": "raid1", 00:14:59.849 "superblock": true, 00:14:59.849 "num_base_bdevs": 4, 00:14:59.849 "num_base_bdevs_discovered": 3, 00:14:59.849 "num_base_bdevs_operational": 3, 00:14:59.849 "base_bdevs_list": [ 00:14:59.849 { 00:14:59.849 "name": "spare", 00:14:59.849 "uuid": "ffd185e9-2ca0-5423-917a-ea06b76625d4", 00:14:59.849 "is_configured": true, 00:14:59.849 "data_offset": 2048, 00:14:59.849 "data_size": 63488 00:14:59.849 }, 00:14:59.849 { 00:14:59.849 "name": null, 00:14:59.849 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.849 "is_configured": false, 00:14:59.849 "data_offset": 2048, 00:14:59.849 "data_size": 63488 00:14:59.849 }, 00:14:59.849 { 00:14:59.849 "name": "BaseBdev3", 00:14:59.849 "uuid": "955cc048-f22c-554c-9200-0bd711268585", 00:14:59.849 "is_configured": true, 00:14:59.849 "data_offset": 2048, 00:14:59.849 "data_size": 63488 00:14:59.849 }, 00:14:59.849 { 00:14:59.849 "name": "BaseBdev4", 
00:14:59.849 "uuid": "01f3b9a9-821b-56eb-93d3-8b6d2b1aa2aa", 00:14:59.849 "is_configured": true, 00:14:59.849 "data_offset": 2048, 00:14:59.849 "data_size": 63488 00:14:59.849 } 00:14:59.849 ] 00:14:59.849 }' 00:14:59.849 16:11:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:59.849 16:11:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:00.109 16:11:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:00.109 16:11:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:00.109 16:11:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:00.109 16:11:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:00.109 16:11:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:00.109 16:11:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:00.109 16:11:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.109 16:11:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.109 16:11:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:00.109 16:11:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.109 16:11:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:00.109 "name": "raid_bdev1", 00:15:00.109 "uuid": "8eb7ec9a-770d-4691-9352-d3031aac9de2", 00:15:00.109 "strip_size_kb": 0, 00:15:00.109 "state": "online", 00:15:00.109 "raid_level": "raid1", 00:15:00.109 "superblock": true, 00:15:00.109 "num_base_bdevs": 4, 00:15:00.109 "num_base_bdevs_discovered": 3, 00:15:00.109 
"num_base_bdevs_operational": 3, 00:15:00.109 "base_bdevs_list": [ 00:15:00.109 { 00:15:00.109 "name": "spare", 00:15:00.109 "uuid": "ffd185e9-2ca0-5423-917a-ea06b76625d4", 00:15:00.109 "is_configured": true, 00:15:00.109 "data_offset": 2048, 00:15:00.109 "data_size": 63488 00:15:00.109 }, 00:15:00.109 { 00:15:00.109 "name": null, 00:15:00.109 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.109 "is_configured": false, 00:15:00.109 "data_offset": 2048, 00:15:00.109 "data_size": 63488 00:15:00.109 }, 00:15:00.109 { 00:15:00.109 "name": "BaseBdev3", 00:15:00.109 "uuid": "955cc048-f22c-554c-9200-0bd711268585", 00:15:00.109 "is_configured": true, 00:15:00.109 "data_offset": 2048, 00:15:00.109 "data_size": 63488 00:15:00.109 }, 00:15:00.109 { 00:15:00.109 "name": "BaseBdev4", 00:15:00.109 "uuid": "01f3b9a9-821b-56eb-93d3-8b6d2b1aa2aa", 00:15:00.109 "is_configured": true, 00:15:00.109 "data_offset": 2048, 00:15:00.109 "data_size": 63488 00:15:00.109 } 00:15:00.109 ] 00:15:00.109 }' 00:15:00.109 16:11:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:00.109 16:11:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:00.109 16:11:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:00.109 16:11:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:00.109 16:11:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:15:00.109 16:11:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.109 16:11:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.109 16:11:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:00.370 16:11:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:15:00.370 16:11:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:15:00.370 16:11:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:00.370 16:11:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.370 16:11:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:00.370 [2024-12-12 16:11:26.488200] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:00.370 16:11:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.370 16:11:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:00.370 16:11:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:00.370 16:11:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:00.370 16:11:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:00.370 16:11:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:00.370 16:11:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:00.370 16:11:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:00.370 16:11:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:00.370 16:11:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:00.370 16:11:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:00.370 16:11:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.370 16:11:26 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@563 -- # xtrace_disable
00:15:00.370 16:11:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:00.370 16:11:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:15:00.370 16:11:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:00.370 16:11:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:00.370 "name": "raid_bdev1",
00:15:00.370 "uuid": "8eb7ec9a-770d-4691-9352-d3031aac9de2",
00:15:00.370 "strip_size_kb": 0,
00:15:00.370 "state": "online",
00:15:00.370 "raid_level": "raid1",
00:15:00.370 "superblock": true,
00:15:00.370 "num_base_bdevs": 4,
00:15:00.370 "num_base_bdevs_discovered": 2,
00:15:00.370 "num_base_bdevs_operational": 2,
00:15:00.370 "base_bdevs_list": [
00:15:00.370 {
00:15:00.370 "name": null,
00:15:00.370 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:00.370 "is_configured": false,
00:15:00.370 "data_offset": 0,
00:15:00.370 "data_size": 63488
00:15:00.370 },
00:15:00.370 {
00:15:00.370 "name": null,
00:15:00.370 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:00.370 "is_configured": false,
00:15:00.370 "data_offset": 2048,
00:15:00.370 "data_size": 63488
00:15:00.370 },
00:15:00.370 {
00:15:00.370 "name": "BaseBdev3",
00:15:00.370 "uuid": "955cc048-f22c-554c-9200-0bd711268585",
00:15:00.370 "is_configured": true,
00:15:00.370 "data_offset": 2048,
00:15:00.370 "data_size": 63488
00:15:00.370 },
00:15:00.370 {
00:15:00.370 "name": "BaseBdev4",
00:15:00.370 "uuid": "01f3b9a9-821b-56eb-93d3-8b6d2b1aa2aa",
00:15:00.370 "is_configured": true,
00:15:00.370 "data_offset": 2048,
00:15:00.370 "data_size": 63488
00:15:00.370 }
00:15:00.370 ]
00:15:00.370 }'
00:15:00.370 16:11:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:00.370 16:11:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:15:00.629 16:11:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:15:00.629 16:11:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:00.629 16:11:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:15:00.629 [2024-12-12 16:11:26.955755] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:15:00.629 [2024-12-12 16:11:26.955996] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6)
00:15:00.629 [2024-12-12 16:11:26.956017] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1.
00:15:00.629 [2024-12-12 16:11:26.956051] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:15:00.629 [2024-12-12 16:11:26.970795] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037230
00:15:00.629 16:11:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:00.629 16:11:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1
00:15:00.629 [2024-12-12 16:11:26.972675] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:15:02.010 16:11:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:15:02.010 16:11:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:15:02.010 16:11:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:15:02.010 16:11:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:15:02.010 16:11:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:15:02.010 16:11:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:02.010 16:11:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:02.010 16:11:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:02.010 16:11:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:15:02.010 16:11:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:02.010 16:11:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:15:02.010 "name": "raid_bdev1",
00:15:02.010 "uuid": "8eb7ec9a-770d-4691-9352-d3031aac9de2",
00:15:02.010 "strip_size_kb": 0,
00:15:02.010 "state": "online",
00:15:02.010 "raid_level": "raid1",
00:15:02.010 "superblock": true,
00:15:02.010 "num_base_bdevs": 4,
00:15:02.010 "num_base_bdevs_discovered": 3,
00:15:02.010 "num_base_bdevs_operational": 3,
00:15:02.010 "process": {
00:15:02.010 "type": "rebuild",
00:15:02.010 "target": "spare",
00:15:02.010 "progress": {
00:15:02.010 "blocks": 20480,
00:15:02.010 "percent": 32
00:15:02.010 }
00:15:02.010 },
00:15:02.010 "base_bdevs_list": [
00:15:02.010 {
00:15:02.010 "name": "spare",
00:15:02.010 "uuid": "ffd185e9-2ca0-5423-917a-ea06b76625d4",
00:15:02.010 "is_configured": true,
00:15:02.010 "data_offset": 2048,
00:15:02.010 "data_size": 63488
00:15:02.010 },
00:15:02.010 {
00:15:02.010 "name": null,
00:15:02.010 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:02.010 "is_configured": false,
00:15:02.010 "data_offset": 2048,
00:15:02.011 "data_size": 63488
00:15:02.011 },
00:15:02.011 {
00:15:02.011 "name": "BaseBdev3",
00:15:02.011 "uuid": "955cc048-f22c-554c-9200-0bd711268585",
00:15:02.011 "is_configured": true,
00:15:02.011 "data_offset": 2048,
00:15:02.011 "data_size": 63488
00:15:02.011 },
00:15:02.011 {
00:15:02.011 "name": "BaseBdev4",
00:15:02.011 "uuid": "01f3b9a9-821b-56eb-93d3-8b6d2b1aa2aa",
00:15:02.011 "is_configured": true,
00:15:02.011 "data_offset": 2048,
00:15:02.011 "data_size": 63488
00:15:02.011 }
00:15:02.011 ]
00:15:02.011 }'
00:15:02.011 16:11:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:15:02.011 16:11:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:15:02.011 16:11:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:15:02.011 16:11:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:15:02.011 16:11:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare
00:15:02.011 16:11:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:02.011 16:11:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:15:02.011 [2024-12-12 16:11:28.109048] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:15:02.011 [2024-12-12 16:11:28.178704] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:15:02.011 [2024-12-12 16:11:28.178812] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:15:02.011 [2024-12-12 16:11:28.178828] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:15:02.011 [2024-12-12 16:11:28.178837] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:15:02.011 16:11:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:02.011 16:11:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:15:02.011 16:11:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:15:02.011 16:11:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:15:02.011 16:11:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:15:02.011 16:11:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:15:02.011 16:11:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:15:02.011 16:11:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:02.011 16:11:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:02.011 16:11:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:02.011 16:11:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:02.011 16:11:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:02.011 16:11:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:02.011 16:11:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:02.011 16:11:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:15:02.011 16:11:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:02.011 16:11:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:02.011 "name": "raid_bdev1",
00:15:02.011 "uuid": "8eb7ec9a-770d-4691-9352-d3031aac9de2",
00:15:02.011 "strip_size_kb": 0,
00:15:02.011 "state": "online",
00:15:02.011 "raid_level": "raid1",
00:15:02.011 "superblock": true,
00:15:02.011 "num_base_bdevs": 4,
00:15:02.011 "num_base_bdevs_discovered": 2,
00:15:02.011 "num_base_bdevs_operational": 2,
00:15:02.011 "base_bdevs_list": [
00:15:02.011 {
00:15:02.011 "name": null,
00:15:02.011 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:02.011 "is_configured": false,
00:15:02.011 "data_offset": 0,
00:15:02.011 "data_size": 63488
00:15:02.011 },
00:15:02.011 {
00:15:02.011 "name": null,
00:15:02.011 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:02.011 "is_configured": false,
00:15:02.011 "data_offset": 2048,
00:15:02.011 "data_size": 63488
00:15:02.011 },
00:15:02.011 {
00:15:02.011 "name": "BaseBdev3",
00:15:02.011 "uuid": "955cc048-f22c-554c-9200-0bd711268585",
00:15:02.011 "is_configured": true,
00:15:02.011 "data_offset": 2048,
00:15:02.011 "data_size": 63488
00:15:02.011 },
00:15:02.011 {
00:15:02.011 "name": "BaseBdev4",
00:15:02.011 "uuid": "01f3b9a9-821b-56eb-93d3-8b6d2b1aa2aa",
00:15:02.011 "is_configured": true,
00:15:02.011 "data_offset": 2048,
00:15:02.011 "data_size": 63488
00:15:02.011 }
00:15:02.011 ]
00:15:02.011 }'
00:15:02.011 16:11:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:02.011 16:11:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:15:02.580 16:11:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:15:02.580 16:11:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:02.580 16:11:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:15:02.580 [2024-12-12 16:11:28.650689] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:15:02.580 [2024-12-12 16:11:28.650836] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:02.580 [2024-12-12 16:11:28.650884] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680
00:15:02.580 [2024-12-12 16:11:28.650927] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:02.580 [2024-12-12 16:11:28.651437] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:02.580 [2024-12-12 16:11:28.651502] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:15:02.580 [2024-12-12 16:11:28.651643] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare
00:15:02.580 [2024-12-12 16:11:28.651687] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6)
00:15:02.580 [2024-12-12 16:11:28.651725] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1.
00:15:02.580 [2024-12-12 16:11:28.651792] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:15:02.580 [2024-12-12 16:11:28.666477] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037300
00:15:02.580 spare
00:15:02.580 16:11:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:02.580 [2024-12-12 16:11:28.668443] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:15:02.580 16:11:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1
00:15:03.519 16:11:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:15:03.519 16:11:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:15:03.519 16:11:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:15:03.519 16:11:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:15:03.519 16:11:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:15:03.519 16:11:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:03.519 16:11:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:03.519 16:11:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:03.519 16:11:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:15:03.519 16:11:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:03.519 16:11:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:15:03.519 "name": "raid_bdev1",
00:15:03.519 "uuid": "8eb7ec9a-770d-4691-9352-d3031aac9de2",
00:15:03.519 "strip_size_kb": 0,
00:15:03.519 "state": "online",
00:15:03.519 "raid_level": "raid1",
00:15:03.519 "superblock": true,
00:15:03.519 "num_base_bdevs": 4,
00:15:03.519 "num_base_bdevs_discovered": 3,
00:15:03.519 "num_base_bdevs_operational": 3,
00:15:03.519 "process": {
00:15:03.519 "type": "rebuild",
00:15:03.519 "target": "spare",
00:15:03.519 "progress": {
00:15:03.519 "blocks": 20480,
00:15:03.519 "percent": 32
00:15:03.519 }
00:15:03.519 },
00:15:03.519 "base_bdevs_list": [
00:15:03.519 {
00:15:03.519 "name": "spare",
00:15:03.519 "uuid": "ffd185e9-2ca0-5423-917a-ea06b76625d4",
00:15:03.519 "is_configured": true,
00:15:03.519 "data_offset": 2048,
00:15:03.519 "data_size": 63488
00:15:03.519 },
00:15:03.519 {
00:15:03.519 "name": null,
00:15:03.519 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:03.520 "is_configured": false,
00:15:03.520 "data_offset": 2048,
00:15:03.520 "data_size": 63488
00:15:03.520 },
00:15:03.520 {
00:15:03.520 "name": "BaseBdev3",
00:15:03.520 "uuid": "955cc048-f22c-554c-9200-0bd711268585",
00:15:03.520 "is_configured": true,
00:15:03.520 "data_offset": 2048,
00:15:03.520 "data_size": 63488
00:15:03.520 },
00:15:03.520 {
00:15:03.520 "name": "BaseBdev4",
00:15:03.520 "uuid": "01f3b9a9-821b-56eb-93d3-8b6d2b1aa2aa",
00:15:03.520 "is_configured": true,
00:15:03.520 "data_offset": 2048,
00:15:03.520 "data_size": 63488
00:15:03.520 }
00:15:03.520 ]
00:15:03.520 }'
00:15:03.520 16:11:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:15:03.520 16:11:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:15:03.520 16:11:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:15:03.520 16:11:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:15:03.520 16:11:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare
00:15:03.520 16:11:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:03.520 16:11:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:15:03.520 [2024-12-12 16:11:29.836292] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:15:03.780 [2024-12-12 16:11:29.874477] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:15:03.780 [2024-12-12 16:11:29.874532] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:15:03.780 [2024-12-12 16:11:29.874551] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:15:03.780 [2024-12-12 16:11:29.874559] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:15:03.780 16:11:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:03.780 16:11:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:15:03.780 16:11:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:15:03.780 16:11:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:15:03.780 16:11:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:15:03.780 16:11:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:15:03.780 16:11:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:15:03.780 16:11:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:03.780 16:11:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:03.780 16:11:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:03.780 16:11:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:03.780 16:11:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:03.780 16:11:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:03.780 16:11:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:03.780 16:11:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:15:03.780 16:11:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:03.780 16:11:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:03.780 "name": "raid_bdev1",
00:15:03.780 "uuid": "8eb7ec9a-770d-4691-9352-d3031aac9de2",
00:15:03.780 "strip_size_kb": 0,
00:15:03.780 "state": "online",
00:15:03.780 "raid_level": "raid1",
00:15:03.780 "superblock": true,
00:15:03.780 "num_base_bdevs": 4,
00:15:03.780 "num_base_bdevs_discovered": 2,
00:15:03.780 "num_base_bdevs_operational": 2,
00:15:03.780 "base_bdevs_list": [
00:15:03.780 {
00:15:03.780 "name": null,
00:15:03.780 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:03.780 "is_configured": false,
00:15:03.780 "data_offset": 0,
00:15:03.780 "data_size": 63488
00:15:03.780 },
00:15:03.780 {
00:15:03.780 "name": null,
00:15:03.780 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:03.780 "is_configured": false,
00:15:03.780 "data_offset": 2048,
00:15:03.780 "data_size": 63488
00:15:03.780 },
00:15:03.780 {
00:15:03.780 "name": "BaseBdev3",
00:15:03.780 "uuid": "955cc048-f22c-554c-9200-0bd711268585",
00:15:03.780 "is_configured": true,
00:15:03.780 "data_offset": 2048,
00:15:03.780 "data_size": 63488
00:15:03.780 },
00:15:03.780 {
00:15:03.780 "name": "BaseBdev4",
00:15:03.780 "uuid": "01f3b9a9-821b-56eb-93d3-8b6d2b1aa2aa",
00:15:03.780 "is_configured": true,
00:15:03.780 "data_offset": 2048,
00:15:03.780 "data_size": 63488
00:15:03.780 }
00:15:03.780 ]
00:15:03.780 }'
00:15:03.780 16:11:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:03.780 16:11:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:15:04.040 16:11:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none
00:15:04.040 16:11:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:15:04.040 16:11:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:15:04.040 16:11:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none
00:15:04.040 16:11:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:15:04.040 16:11:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:04.040 16:11:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:04.040 16:11:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:15:04.040 16:11:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:04.040 16:11:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:04.301 16:11:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:15:04.301 "name": "raid_bdev1",
00:15:04.301 "uuid": "8eb7ec9a-770d-4691-9352-d3031aac9de2",
00:15:04.301 "strip_size_kb": 0,
00:15:04.301 "state": "online",
00:15:04.301 "raid_level": "raid1",
00:15:04.301 "superblock": true,
00:15:04.301 "num_base_bdevs": 4,
00:15:04.301 "num_base_bdevs_discovered": 2,
00:15:04.301 "num_base_bdevs_operational": 2,
00:15:04.301 "base_bdevs_list": [
00:15:04.301 {
00:15:04.301 "name": null,
00:15:04.301 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:04.301 "is_configured": false,
00:15:04.301 "data_offset": 0,
00:15:04.301 "data_size": 63488
00:15:04.301 },
00:15:04.301 {
00:15:04.301 "name": null,
00:15:04.301 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:04.301 "is_configured": false,
00:15:04.301 "data_offset": 2048,
00:15:04.301 "data_size": 63488
00:15:04.301 },
00:15:04.301 {
00:15:04.301 "name": "BaseBdev3",
00:15:04.301 "uuid": "955cc048-f22c-554c-9200-0bd711268585",
00:15:04.301 "is_configured": true,
00:15:04.301 "data_offset": 2048,
00:15:04.301 "data_size": 63488
00:15:04.301 },
00:15:04.301 {
00:15:04.301 "name": "BaseBdev4",
00:15:04.301 "uuid": "01f3b9a9-821b-56eb-93d3-8b6d2b1aa2aa",
00:15:04.301 "is_configured": true,
00:15:04.301 "data_offset": 2048,
00:15:04.301 "data_size": 63488
00:15:04.301 }
00:15:04.301 ]
00:15:04.301 }'
00:15:04.301 16:11:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:15:04.301 16:11:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:15:04.301 16:11:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:15:04.301 16:11:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:15:04.301 16:11:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1
00:15:04.301 16:11:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:04.301 16:11:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:15:04.301 16:11:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:04.301 16:11:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:15:04.301 16:11:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:04.301 16:11:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:15:04.301 [2024-12-12 16:11:30.518452] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:15:04.301 [2024-12-12 16:11:30.518599] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:04.301 [2024-12-12 16:11:30.518643] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80
00:15:04.301 [2024-12-12 16:11:30.518670] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:04.301 [2024-12-12 16:11:30.519191] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:04.301 [2024-12-12 16:11:30.519251] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:15:04.301 [2024-12-12 16:11:30.519381] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1
00:15:04.301 [2024-12-12 16:11:30.519421] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6)
00:15:04.301 [2024-12-12 16:11:30.519461] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid
00:15:04.301 [2024-12-12 16:11:30.519500] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument
00:15:04.301 BaseBdev1
00:15:04.301 16:11:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:04.301 16:11:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1
00:15:05.241 16:11:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:15:05.241 16:11:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:15:05.241 16:11:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:15:05.241 16:11:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:15:05.241 16:11:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:15:05.241 16:11:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:15:05.241 16:11:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:05.241 16:11:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:05.241 16:11:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:05.241 16:11:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:05.241 16:11:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:05.241 16:11:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:05.241 16:11:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:05.241 16:11:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:15:05.241 16:11:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:05.241 16:11:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:05.241 "name": "raid_bdev1",
00:15:05.241 "uuid": "8eb7ec9a-770d-4691-9352-d3031aac9de2",
00:15:05.241 "strip_size_kb": 0,
00:15:05.241 "state": "online",
00:15:05.241 "raid_level": "raid1",
00:15:05.241 "superblock": true,
00:15:05.241 "num_base_bdevs": 4,
00:15:05.241 "num_base_bdevs_discovered": 2,
00:15:05.241 "num_base_bdevs_operational": 2,
00:15:05.241 "base_bdevs_list": [
00:15:05.241 {
00:15:05.241 "name": null,
00:15:05.241 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:05.241 "is_configured": false,
00:15:05.241 "data_offset": 0,
00:15:05.241 "data_size": 63488
00:15:05.241 },
00:15:05.241 {
00:15:05.241 "name": null,
00:15:05.241 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:05.241 "is_configured": false,
00:15:05.241 "data_offset": 2048,
00:15:05.241 "data_size": 63488
00:15:05.241 },
00:15:05.241 {
00:15:05.241 "name": "BaseBdev3",
00:15:05.241 "uuid": "955cc048-f22c-554c-9200-0bd711268585",
00:15:05.241 "is_configured": true,
00:15:05.241 "data_offset": 2048,
00:15:05.241 "data_size": 63488
00:15:05.241 },
00:15:05.241 {
00:15:05.241 "name": "BaseBdev4",
00:15:05.241 "uuid": "01f3b9a9-821b-56eb-93d3-8b6d2b1aa2aa",
00:15:05.241 "is_configured": true,
00:15:05.241 "data_offset": 2048,
00:15:05.241 "data_size": 63488
00:15:05.241 }
00:15:05.241 ]
00:15:05.241 }'
00:15:05.241 16:11:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:05.241 16:11:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:15:05.811 16:11:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none
00:15:05.811 16:11:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:15:05.811 16:11:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:15:05.811 16:11:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none
00:15:05.811 16:11:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:15:05.811 16:11:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:05.811 16:11:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:05.811 16:11:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:05.811 16:11:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:15:05.811 16:11:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:05.811 16:11:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:15:05.811 "name": "raid_bdev1",
00:15:05.811 "uuid": "8eb7ec9a-770d-4691-9352-d3031aac9de2",
00:15:05.811 "strip_size_kb": 0,
00:15:05.811 "state": "online",
00:15:05.811 "raid_level": "raid1",
00:15:05.811 "superblock": true,
00:15:05.811 "num_base_bdevs": 4,
00:15:05.811 "num_base_bdevs_discovered": 2,
00:15:05.811 "num_base_bdevs_operational": 2,
00:15:05.811 "base_bdevs_list": [
00:15:05.811 {
00:15:05.811 "name": null,
00:15:05.811 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:05.811 "is_configured": false,
00:15:05.811 "data_offset": 0,
00:15:05.811 "data_size": 63488
00:15:05.811 },
00:15:05.811 {
00:15:05.811 "name": null,
00:15:05.811 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:05.811 "is_configured": false,
00:15:05.811 "data_offset": 2048,
00:15:05.811 "data_size": 63488
00:15:05.811 },
00:15:05.811 {
00:15:05.811 "name": "BaseBdev3",
00:15:05.811 "uuid": "955cc048-f22c-554c-9200-0bd711268585",
00:15:05.811 "is_configured": true,
00:15:05.811 "data_offset": 2048,
00:15:05.811 "data_size": 63488
00:15:05.811 },
00:15:05.811 {
00:15:05.811 "name": "BaseBdev4",
00:15:05.811 "uuid": "01f3b9a9-821b-56eb-93d3-8b6d2b1aa2aa",
00:15:05.811 "is_configured": true,
00:15:05.811 "data_offset": 2048,
00:15:05.811 "data_size": 63488
00:15:05.811 }
00:15:05.811 ]
00:15:05.811 }'
00:15:05.811 16:11:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:15:05.811 16:11:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:15:05.811 16:11:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:15:05.811 16:11:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:15:05.811 16:11:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1
00:15:05.811 16:11:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0
00:15:05.811 16:11:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1
00:15:05.811 16:11:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:15:05.811 16:11:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:15:05.811 16:11:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:15:05.811 16:11:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:15:05.811 16:11:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1
00:15:05.811 16:11:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:05.811 16:11:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:15:05.811 [2024-12-12 16:11:32.036077] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:15:05.811 [2024-12-12 16:11:32.036260] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6)
00:15:05.811 [2024-12-12 16:11:32.036279] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid
00:15:05.811 request:
00:15:05.811 {
00:15:05.811 "base_bdev": "BaseBdev1",
00:15:05.811 "raid_bdev": "raid_bdev1",
00:15:05.811 "method": "bdev_raid_add_base_bdev",
00:15:05.811 "req_id": 1
00:15:05.811 }
00:15:05.811 Got JSON-RPC error response
00:15:05.811 response:
00:15:05.811 {
00:15:05.811 "code": -22,
00:15:05.811 "message": "Failed to add base bdev to RAID bdev: Invalid argument"
00:15:05.811 }
00:15:05.811 16:11:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:15:05.811 16:11:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1
00:15:05.811 16:11:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:15:05.811 16:11:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:15:05.811 16:11:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:15:05.811 16:11:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1
00:15:06.750 16:11:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:15:06.750 16:11:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:15:06.750 16:11:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:15:06.750 16:11:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:15:06.750 16:11:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:15:06.750 16:11:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:15:06.750 16:11:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:06.750 16:11:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:06.750 16:11:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:06.750 16:11:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:06.750 16:11:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:06.751 16:11:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:06.751 16:11:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:06.751 16:11:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:15:06.751 16:11:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:07.010 16:11:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:07.010 "name": "raid_bdev1",
00:15:07.010 "uuid": "8eb7ec9a-770d-4691-9352-d3031aac9de2",
00:15:07.010 "strip_size_kb": 0,
00:15:07.010 "state": "online",
00:15:07.010 "raid_level": "raid1",
00:15:07.010 "superblock": true,
00:15:07.010 "num_base_bdevs": 4,
00:15:07.010 "num_base_bdevs_discovered": 2,
00:15:07.010 "num_base_bdevs_operational": 2,
00:15:07.010 "base_bdevs_list": [
00:15:07.010 {
00:15:07.010 "name": null,
00:15:07.010 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:07.010 "is_configured": false,
00:15:07.010 "data_offset": 0,
00:15:07.010 "data_size": 63488
00:15:07.010 },
00:15:07.010 {
00:15:07.010 "name": null,
00:15:07.010 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:07.010 "is_configured": false,
00:15:07.010 "data_offset": 2048,
00:15:07.010 "data_size": 63488
00:15:07.010 },
00:15:07.010 {
00:15:07.010 "name":
"BaseBdev3", 00:15:07.010 "uuid": "955cc048-f22c-554c-9200-0bd711268585", 00:15:07.010 "is_configured": true, 00:15:07.010 "data_offset": 2048, 00:15:07.010 "data_size": 63488 00:15:07.010 }, 00:15:07.010 { 00:15:07.010 "name": "BaseBdev4", 00:15:07.010 "uuid": "01f3b9a9-821b-56eb-93d3-8b6d2b1aa2aa", 00:15:07.010 "is_configured": true, 00:15:07.010 "data_offset": 2048, 00:15:07.010 "data_size": 63488 00:15:07.010 } 00:15:07.010 ] 00:15:07.010 }' 00:15:07.010 16:11:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:07.010 16:11:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:07.270 16:11:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:07.270 16:11:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:07.270 16:11:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:07.270 16:11:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:07.270 16:11:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:07.270 16:11:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.270 16:11:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:07.270 16:11:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.270 16:11:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:07.270 16:11:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.270 16:11:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:07.270 "name": "raid_bdev1", 00:15:07.270 "uuid": "8eb7ec9a-770d-4691-9352-d3031aac9de2", 00:15:07.270 
"strip_size_kb": 0, 00:15:07.270 "state": "online", 00:15:07.270 "raid_level": "raid1", 00:15:07.270 "superblock": true, 00:15:07.270 "num_base_bdevs": 4, 00:15:07.270 "num_base_bdevs_discovered": 2, 00:15:07.270 "num_base_bdevs_operational": 2, 00:15:07.270 "base_bdevs_list": [ 00:15:07.270 { 00:15:07.270 "name": null, 00:15:07.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:07.270 "is_configured": false, 00:15:07.270 "data_offset": 0, 00:15:07.270 "data_size": 63488 00:15:07.270 }, 00:15:07.270 { 00:15:07.270 "name": null, 00:15:07.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:07.270 "is_configured": false, 00:15:07.270 "data_offset": 2048, 00:15:07.270 "data_size": 63488 00:15:07.270 }, 00:15:07.270 { 00:15:07.270 "name": "BaseBdev3", 00:15:07.270 "uuid": "955cc048-f22c-554c-9200-0bd711268585", 00:15:07.270 "is_configured": true, 00:15:07.270 "data_offset": 2048, 00:15:07.270 "data_size": 63488 00:15:07.270 }, 00:15:07.270 { 00:15:07.270 "name": "BaseBdev4", 00:15:07.270 "uuid": "01f3b9a9-821b-56eb-93d3-8b6d2b1aa2aa", 00:15:07.270 "is_configured": true, 00:15:07.270 "data_offset": 2048, 00:15:07.270 "data_size": 63488 00:15:07.270 } 00:15:07.270 ] 00:15:07.270 }' 00:15:07.270 16:11:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:07.531 16:11:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:07.531 16:11:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:07.531 16:11:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:07.531 16:11:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 81244 00:15:07.531 16:11:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 81244 ']' 00:15:07.531 16:11:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 81244 00:15:07.531 
16:11:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:15:07.531 16:11:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:07.531 16:11:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81244 00:15:07.531 16:11:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:07.531 16:11:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:07.531 killing process with pid 81244 00:15:07.531 Received shutdown signal, test time was about 17.780027 seconds 00:15:07.531 00:15:07.531 Latency(us) 00:15:07.531 [2024-12-12T16:11:33.883Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:07.531 [2024-12-12T16:11:33.883Z] =================================================================================================================== 00:15:07.531 [2024-12-12T16:11:33.883Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:07.531 16:11:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81244' 00:15:07.531 16:11:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 81244 00:15:07.531 [2024-12-12 16:11:33.700219] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:07.531 [2024-12-12 16:11:33.700364] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:07.531 16:11:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 81244 00:15:07.531 [2024-12-12 16:11:33.700439] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:07.531 [2024-12-12 16:11:33.700451] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:15:07.791 [2024-12-12 16:11:34.104580] 
bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:09.173 ************************************ 00:15:09.173 END TEST raid_rebuild_test_sb_io 00:15:09.173 ************************************ 00:15:09.173 16:11:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:15:09.173 00:15:09.173 real 0m21.214s 00:15:09.173 user 0m27.638s 00:15:09.173 sys 0m2.611s 00:15:09.173 16:11:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:09.173 16:11:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:09.173 16:11:35 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:15:09.173 16:11:35 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:15:09.173 16:11:35 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:09.173 16:11:35 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:09.173 16:11:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:09.173 ************************************ 00:15:09.174 START TEST raid5f_state_function_test 00:15:09.174 ************************************ 00:15:09.174 16:11:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 false 00:15:09.174 16:11:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:15:09.174 16:11:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:15:09.174 16:11:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:15:09.174 16:11:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:09.174 16:11:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:09.174 16:11:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= 
num_base_bdevs )) 00:15:09.174 16:11:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:09.174 16:11:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:09.174 16:11:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:09.174 16:11:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:09.174 16:11:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:09.174 16:11:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:09.174 16:11:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:09.174 16:11:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:09.174 16:11:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:09.174 16:11:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:09.174 16:11:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:09.174 16:11:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:09.174 16:11:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:09.174 16:11:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:09.174 16:11:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:09.174 16:11:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:15:09.174 16:11:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:15:09.174 16:11:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # 
strip_size_create_arg='-z 64' 00:15:09.174 16:11:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:15:09.174 16:11:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:15:09.174 16:11:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=81969 00:15:09.174 16:11:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:09.174 Process raid pid: 81969 00:15:09.174 16:11:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 81969' 00:15:09.174 16:11:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 81969 00:15:09.174 16:11:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 81969 ']' 00:15:09.174 16:11:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:09.174 16:11:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:09.174 16:11:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:09.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:09.174 16:11:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:09.174 16:11:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.174 [2024-12-12 16:11:35.415037] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:15:09.174 [2024-12-12 16:11:35.415232] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:09.434 [2024-12-12 16:11:35.591703] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:09.434 [2024-12-12 16:11:35.703818] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:09.693 [2024-12-12 16:11:35.909263] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:09.693 [2024-12-12 16:11:35.909295] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:09.953 16:11:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:09.953 16:11:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:15:09.953 16:11:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:09.953 16:11:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.953 16:11:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.953 [2024-12-12 16:11:36.256191] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:09.953 [2024-12-12 16:11:36.256306] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:09.953 [2024-12-12 16:11:36.256341] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:09.953 [2024-12-12 16:11:36.256355] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:09.953 [2024-12-12 16:11:36.256362] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:15:09.953 [2024-12-12 16:11:36.256370] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:09.953 16:11:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.953 16:11:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:09.953 16:11:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:09.953 16:11:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:09.953 16:11:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:09.953 16:11:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:09.953 16:11:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:09.953 16:11:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:09.954 16:11:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:09.954 16:11:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:09.954 16:11:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:09.954 16:11:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.954 16:11:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:09.954 16:11:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.954 16:11:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.954 16:11:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:15:10.213 16:11:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:10.213 "name": "Existed_Raid", 00:15:10.213 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:10.213 "strip_size_kb": 64, 00:15:10.213 "state": "configuring", 00:15:10.213 "raid_level": "raid5f", 00:15:10.213 "superblock": false, 00:15:10.213 "num_base_bdevs": 3, 00:15:10.213 "num_base_bdevs_discovered": 0, 00:15:10.213 "num_base_bdevs_operational": 3, 00:15:10.213 "base_bdevs_list": [ 00:15:10.213 { 00:15:10.213 "name": "BaseBdev1", 00:15:10.213 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:10.213 "is_configured": false, 00:15:10.213 "data_offset": 0, 00:15:10.213 "data_size": 0 00:15:10.213 }, 00:15:10.213 { 00:15:10.213 "name": "BaseBdev2", 00:15:10.213 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:10.213 "is_configured": false, 00:15:10.213 "data_offset": 0, 00:15:10.213 "data_size": 0 00:15:10.213 }, 00:15:10.213 { 00:15:10.213 "name": "BaseBdev3", 00:15:10.213 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:10.213 "is_configured": false, 00:15:10.213 "data_offset": 0, 00:15:10.213 "data_size": 0 00:15:10.213 } 00:15:10.213 ] 00:15:10.213 }' 00:15:10.213 16:11:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:10.214 16:11:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.473 16:11:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:10.473 16:11:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.473 16:11:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.473 [2024-12-12 16:11:36.743394] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:10.473 [2024-12-12 16:11:36.743495] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007780 name Existed_Raid, state configuring 00:15:10.473 16:11:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.473 16:11:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:10.473 16:11:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.473 16:11:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.473 [2024-12-12 16:11:36.755350] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:10.473 [2024-12-12 16:11:36.755448] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:10.473 [2024-12-12 16:11:36.755479] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:10.473 [2024-12-12 16:11:36.755501] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:10.473 [2024-12-12 16:11:36.755519] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:10.473 [2024-12-12 16:11:36.755539] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:10.473 16:11:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.473 16:11:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:10.473 16:11:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.473 16:11:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.473 [2024-12-12 16:11:36.802714] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:10.473 BaseBdev1 00:15:10.473 16:11:36 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.473 16:11:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:10.473 16:11:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:10.473 16:11:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:10.473 16:11:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:10.473 16:11:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:10.473 16:11:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:10.473 16:11:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:10.473 16:11:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.473 16:11:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.473 16:11:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.473 16:11:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:10.473 16:11:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.473 16:11:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.737 [ 00:15:10.737 { 00:15:10.737 "name": "BaseBdev1", 00:15:10.737 "aliases": [ 00:15:10.737 "05a5a570-68cd-4531-8fcc-bcab3c73e67f" 00:15:10.737 ], 00:15:10.737 "product_name": "Malloc disk", 00:15:10.737 "block_size": 512, 00:15:10.737 "num_blocks": 65536, 00:15:10.737 "uuid": "05a5a570-68cd-4531-8fcc-bcab3c73e67f", 00:15:10.737 "assigned_rate_limits": { 00:15:10.737 "rw_ios_per_sec": 0, 00:15:10.737 
"rw_mbytes_per_sec": 0, 00:15:10.737 "r_mbytes_per_sec": 0, 00:15:10.737 "w_mbytes_per_sec": 0 00:15:10.737 }, 00:15:10.737 "claimed": true, 00:15:10.737 "claim_type": "exclusive_write", 00:15:10.737 "zoned": false, 00:15:10.737 "supported_io_types": { 00:15:10.737 "read": true, 00:15:10.737 "write": true, 00:15:10.737 "unmap": true, 00:15:10.737 "flush": true, 00:15:10.737 "reset": true, 00:15:10.737 "nvme_admin": false, 00:15:10.737 "nvme_io": false, 00:15:10.737 "nvme_io_md": false, 00:15:10.737 "write_zeroes": true, 00:15:10.737 "zcopy": true, 00:15:10.737 "get_zone_info": false, 00:15:10.737 "zone_management": false, 00:15:10.737 "zone_append": false, 00:15:10.737 "compare": false, 00:15:10.737 "compare_and_write": false, 00:15:10.737 "abort": true, 00:15:10.737 "seek_hole": false, 00:15:10.737 "seek_data": false, 00:15:10.737 "copy": true, 00:15:10.737 "nvme_iov_md": false 00:15:10.737 }, 00:15:10.737 "memory_domains": [ 00:15:10.737 { 00:15:10.737 "dma_device_id": "system", 00:15:10.737 "dma_device_type": 1 00:15:10.737 }, 00:15:10.737 { 00:15:10.737 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:10.737 "dma_device_type": 2 00:15:10.737 } 00:15:10.737 ], 00:15:10.737 "driver_specific": {} 00:15:10.737 } 00:15:10.737 ] 00:15:10.737 16:11:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.737 16:11:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:10.737 16:11:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:10.737 16:11:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:10.737 16:11:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:10.737 16:11:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:10.737 16:11:36 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:10.737 16:11:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:10.737 16:11:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:10.737 16:11:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:10.737 16:11:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:10.737 16:11:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:10.737 16:11:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.737 16:11:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:10.737 16:11:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.737 16:11:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.737 16:11:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.737 16:11:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:10.737 "name": "Existed_Raid", 00:15:10.737 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:10.737 "strip_size_kb": 64, 00:15:10.737 "state": "configuring", 00:15:10.737 "raid_level": "raid5f", 00:15:10.737 "superblock": false, 00:15:10.737 "num_base_bdevs": 3, 00:15:10.737 "num_base_bdevs_discovered": 1, 00:15:10.737 "num_base_bdevs_operational": 3, 00:15:10.737 "base_bdevs_list": [ 00:15:10.737 { 00:15:10.737 "name": "BaseBdev1", 00:15:10.737 "uuid": "05a5a570-68cd-4531-8fcc-bcab3c73e67f", 00:15:10.737 "is_configured": true, 00:15:10.737 "data_offset": 0, 00:15:10.737 "data_size": 65536 00:15:10.737 }, 00:15:10.737 { 00:15:10.737 "name": 
"BaseBdev2", 00:15:10.737 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:10.737 "is_configured": false, 00:15:10.737 "data_offset": 0, 00:15:10.737 "data_size": 0 00:15:10.737 }, 00:15:10.737 { 00:15:10.737 "name": "BaseBdev3", 00:15:10.737 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:10.737 "is_configured": false, 00:15:10.737 "data_offset": 0, 00:15:10.737 "data_size": 0 00:15:10.737 } 00:15:10.737 ] 00:15:10.737 }' 00:15:10.737 16:11:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:10.737 16:11:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.005 16:11:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:11.005 16:11:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.005 16:11:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.005 [2024-12-12 16:11:37.294039] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:11.005 [2024-12-12 16:11:37.294136] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:11.005 16:11:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.005 16:11:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:11.005 16:11:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.005 16:11:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.005 [2024-12-12 16:11:37.306037] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:11.005 [2024-12-12 16:11:37.308344] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:15:11.005 [2024-12-12 16:11:37.308406] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:11.005 [2024-12-12 16:11:37.308422] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:11.005 [2024-12-12 16:11:37.308435] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:11.005 16:11:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.005 16:11:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:11.005 16:11:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:11.006 16:11:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:11.006 16:11:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:11.006 16:11:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:11.006 16:11:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:11.006 16:11:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:11.006 16:11:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:11.006 16:11:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:11.006 16:11:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:11.006 16:11:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:11.006 16:11:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:11.006 16:11:37 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.006 16:11:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.006 16:11:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.006 16:11:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:11.006 16:11:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.265 16:11:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:11.265 "name": "Existed_Raid", 00:15:11.265 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:11.265 "strip_size_kb": 64, 00:15:11.265 "state": "configuring", 00:15:11.265 "raid_level": "raid5f", 00:15:11.265 "superblock": false, 00:15:11.265 "num_base_bdevs": 3, 00:15:11.265 "num_base_bdevs_discovered": 1, 00:15:11.265 "num_base_bdevs_operational": 3, 00:15:11.265 "base_bdevs_list": [ 00:15:11.265 { 00:15:11.265 "name": "BaseBdev1", 00:15:11.265 "uuid": "05a5a570-68cd-4531-8fcc-bcab3c73e67f", 00:15:11.265 "is_configured": true, 00:15:11.265 "data_offset": 0, 00:15:11.265 "data_size": 65536 00:15:11.265 }, 00:15:11.265 { 00:15:11.265 "name": "BaseBdev2", 00:15:11.265 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:11.265 "is_configured": false, 00:15:11.265 "data_offset": 0, 00:15:11.265 "data_size": 0 00:15:11.265 }, 00:15:11.265 { 00:15:11.265 "name": "BaseBdev3", 00:15:11.265 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:11.265 "is_configured": false, 00:15:11.265 "data_offset": 0, 00:15:11.265 "data_size": 0 00:15:11.265 } 00:15:11.265 ] 00:15:11.265 }' 00:15:11.265 16:11:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:11.265 16:11:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.525 16:11:37 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:11.525 16:11:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.525 16:11:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.525 [2024-12-12 16:11:37.812743] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:11.525 BaseBdev2 00:15:11.525 16:11:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.525 16:11:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:11.525 16:11:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:11.525 16:11:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:11.525 16:11:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:11.525 16:11:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:11.525 16:11:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:11.525 16:11:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:11.525 16:11:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.525 16:11:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.525 16:11:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.525 16:11:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:11.525 16:11:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.525 16:11:37 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:11.525 [ 00:15:11.525 { 00:15:11.525 "name": "BaseBdev2", 00:15:11.525 "aliases": [ 00:15:11.525 "998ad8cc-5267-412c-a25f-b95ecb3c559f" 00:15:11.525 ], 00:15:11.525 "product_name": "Malloc disk", 00:15:11.525 "block_size": 512, 00:15:11.525 "num_blocks": 65536, 00:15:11.525 "uuid": "998ad8cc-5267-412c-a25f-b95ecb3c559f", 00:15:11.525 "assigned_rate_limits": { 00:15:11.525 "rw_ios_per_sec": 0, 00:15:11.525 "rw_mbytes_per_sec": 0, 00:15:11.525 "r_mbytes_per_sec": 0, 00:15:11.525 "w_mbytes_per_sec": 0 00:15:11.525 }, 00:15:11.525 "claimed": true, 00:15:11.525 "claim_type": "exclusive_write", 00:15:11.525 "zoned": false, 00:15:11.525 "supported_io_types": { 00:15:11.525 "read": true, 00:15:11.525 "write": true, 00:15:11.525 "unmap": true, 00:15:11.525 "flush": true, 00:15:11.525 "reset": true, 00:15:11.525 "nvme_admin": false, 00:15:11.525 "nvme_io": false, 00:15:11.525 "nvme_io_md": false, 00:15:11.525 "write_zeroes": true, 00:15:11.525 "zcopy": true, 00:15:11.525 "get_zone_info": false, 00:15:11.525 "zone_management": false, 00:15:11.525 "zone_append": false, 00:15:11.525 "compare": false, 00:15:11.525 "compare_and_write": false, 00:15:11.525 "abort": true, 00:15:11.525 "seek_hole": false, 00:15:11.525 "seek_data": false, 00:15:11.525 "copy": true, 00:15:11.525 "nvme_iov_md": false 00:15:11.525 }, 00:15:11.525 "memory_domains": [ 00:15:11.525 { 00:15:11.525 "dma_device_id": "system", 00:15:11.525 "dma_device_type": 1 00:15:11.525 }, 00:15:11.525 { 00:15:11.525 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:11.525 "dma_device_type": 2 00:15:11.525 } 00:15:11.525 ], 00:15:11.525 "driver_specific": {} 00:15:11.525 } 00:15:11.525 ] 00:15:11.525 16:11:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.525 16:11:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:11.525 16:11:37 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:11.525 16:11:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:11.525 16:11:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:11.525 16:11:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:11.525 16:11:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:11.525 16:11:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:11.525 16:11:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:11.525 16:11:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:11.525 16:11:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:11.525 16:11:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:11.525 16:11:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:11.525 16:11:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:11.525 16:11:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.525 16:11:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:11.525 16:11:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.525 16:11:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.784 16:11:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.784 16:11:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:15:11.784 "name": "Existed_Raid", 00:15:11.784 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:11.784 "strip_size_kb": 64, 00:15:11.784 "state": "configuring", 00:15:11.784 "raid_level": "raid5f", 00:15:11.784 "superblock": false, 00:15:11.784 "num_base_bdevs": 3, 00:15:11.785 "num_base_bdevs_discovered": 2, 00:15:11.785 "num_base_bdevs_operational": 3, 00:15:11.785 "base_bdevs_list": [ 00:15:11.785 { 00:15:11.785 "name": "BaseBdev1", 00:15:11.785 "uuid": "05a5a570-68cd-4531-8fcc-bcab3c73e67f", 00:15:11.785 "is_configured": true, 00:15:11.785 "data_offset": 0, 00:15:11.785 "data_size": 65536 00:15:11.785 }, 00:15:11.785 { 00:15:11.785 "name": "BaseBdev2", 00:15:11.785 "uuid": "998ad8cc-5267-412c-a25f-b95ecb3c559f", 00:15:11.785 "is_configured": true, 00:15:11.785 "data_offset": 0, 00:15:11.785 "data_size": 65536 00:15:11.785 }, 00:15:11.785 { 00:15:11.785 "name": "BaseBdev3", 00:15:11.785 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:11.785 "is_configured": false, 00:15:11.785 "data_offset": 0, 00:15:11.785 "data_size": 0 00:15:11.785 } 00:15:11.785 ] 00:15:11.785 }' 00:15:11.785 16:11:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:11.785 16:11:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.044 16:11:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:12.044 16:11:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.044 16:11:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.044 [2024-12-12 16:11:38.302094] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:12.044 [2024-12-12 16:11:38.302185] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:12.044 [2024-12-12 16:11:38.302204] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:12.044 [2024-12-12 16:11:38.302536] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:12.044 [2024-12-12 16:11:38.308638] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:12.044 [2024-12-12 16:11:38.308747] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:12.044 [2024-12-12 16:11:38.309119] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:12.044 BaseBdev3 00:15:12.044 16:11:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.044 16:11:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:12.044 16:11:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:12.044 16:11:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:12.044 16:11:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:12.044 16:11:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:12.044 16:11:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:12.044 16:11:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:12.044 16:11:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.044 16:11:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.044 16:11:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.044 16:11:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:15:12.044 16:11:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.044 16:11:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.044 [ 00:15:12.044 { 00:15:12.044 "name": "BaseBdev3", 00:15:12.044 "aliases": [ 00:15:12.044 "0227fbba-7863-4563-97e7-9e7a7d5461b8" 00:15:12.044 ], 00:15:12.044 "product_name": "Malloc disk", 00:15:12.044 "block_size": 512, 00:15:12.044 "num_blocks": 65536, 00:15:12.044 "uuid": "0227fbba-7863-4563-97e7-9e7a7d5461b8", 00:15:12.044 "assigned_rate_limits": { 00:15:12.044 "rw_ios_per_sec": 0, 00:15:12.044 "rw_mbytes_per_sec": 0, 00:15:12.044 "r_mbytes_per_sec": 0, 00:15:12.044 "w_mbytes_per_sec": 0 00:15:12.044 }, 00:15:12.044 "claimed": true, 00:15:12.044 "claim_type": "exclusive_write", 00:15:12.044 "zoned": false, 00:15:12.044 "supported_io_types": { 00:15:12.044 "read": true, 00:15:12.044 "write": true, 00:15:12.044 "unmap": true, 00:15:12.044 "flush": true, 00:15:12.044 "reset": true, 00:15:12.044 "nvme_admin": false, 00:15:12.044 "nvme_io": false, 00:15:12.044 "nvme_io_md": false, 00:15:12.044 "write_zeroes": true, 00:15:12.044 "zcopy": true, 00:15:12.044 "get_zone_info": false, 00:15:12.044 "zone_management": false, 00:15:12.044 "zone_append": false, 00:15:12.044 "compare": false, 00:15:12.044 "compare_and_write": false, 00:15:12.044 "abort": true, 00:15:12.044 "seek_hole": false, 00:15:12.044 "seek_data": false, 00:15:12.044 "copy": true, 00:15:12.044 "nvme_iov_md": false 00:15:12.044 }, 00:15:12.044 "memory_domains": [ 00:15:12.044 { 00:15:12.044 "dma_device_id": "system", 00:15:12.044 "dma_device_type": 1 00:15:12.044 }, 00:15:12.044 { 00:15:12.044 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:12.044 "dma_device_type": 2 00:15:12.044 } 00:15:12.044 ], 00:15:12.044 "driver_specific": {} 00:15:12.044 } 00:15:12.044 ] 00:15:12.044 16:11:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:15:12.044 16:11:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:12.044 16:11:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:12.044 16:11:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:12.044 16:11:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:12.044 16:11:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:12.044 16:11:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:12.044 16:11:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:12.044 16:11:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:12.044 16:11:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:12.044 16:11:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:12.044 16:11:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:12.044 16:11:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:12.044 16:11:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:12.044 16:11:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.045 16:11:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:12.045 16:11:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.045 16:11:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.045 16:11:38 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.304 16:11:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:12.304 "name": "Existed_Raid", 00:15:12.304 "uuid": "7cbf2a5a-714b-4e80-ab58-9ed39ab40f23", 00:15:12.304 "strip_size_kb": 64, 00:15:12.304 "state": "online", 00:15:12.304 "raid_level": "raid5f", 00:15:12.304 "superblock": false, 00:15:12.304 "num_base_bdevs": 3, 00:15:12.304 "num_base_bdevs_discovered": 3, 00:15:12.304 "num_base_bdevs_operational": 3, 00:15:12.304 "base_bdevs_list": [ 00:15:12.304 { 00:15:12.304 "name": "BaseBdev1", 00:15:12.304 "uuid": "05a5a570-68cd-4531-8fcc-bcab3c73e67f", 00:15:12.304 "is_configured": true, 00:15:12.304 "data_offset": 0, 00:15:12.304 "data_size": 65536 00:15:12.304 }, 00:15:12.304 { 00:15:12.304 "name": "BaseBdev2", 00:15:12.304 "uuid": "998ad8cc-5267-412c-a25f-b95ecb3c559f", 00:15:12.304 "is_configured": true, 00:15:12.304 "data_offset": 0, 00:15:12.304 "data_size": 65536 00:15:12.304 }, 00:15:12.304 { 00:15:12.304 "name": "BaseBdev3", 00:15:12.304 "uuid": "0227fbba-7863-4563-97e7-9e7a7d5461b8", 00:15:12.304 "is_configured": true, 00:15:12.304 "data_offset": 0, 00:15:12.304 "data_size": 65536 00:15:12.304 } 00:15:12.304 ] 00:15:12.304 }' 00:15:12.304 16:11:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:12.304 16:11:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.564 16:11:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:12.564 16:11:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:12.564 16:11:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:12.564 16:11:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:12.564 16:11:38 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:12.564 16:11:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:12.564 16:11:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:12.564 16:11:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:12.564 16:11:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.564 16:11:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.564 [2024-12-12 16:11:38.792390] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:12.564 16:11:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.564 16:11:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:12.564 "name": "Existed_Raid", 00:15:12.564 "aliases": [ 00:15:12.564 "7cbf2a5a-714b-4e80-ab58-9ed39ab40f23" 00:15:12.564 ], 00:15:12.564 "product_name": "Raid Volume", 00:15:12.564 "block_size": 512, 00:15:12.564 "num_blocks": 131072, 00:15:12.564 "uuid": "7cbf2a5a-714b-4e80-ab58-9ed39ab40f23", 00:15:12.564 "assigned_rate_limits": { 00:15:12.564 "rw_ios_per_sec": 0, 00:15:12.564 "rw_mbytes_per_sec": 0, 00:15:12.564 "r_mbytes_per_sec": 0, 00:15:12.564 "w_mbytes_per_sec": 0 00:15:12.564 }, 00:15:12.564 "claimed": false, 00:15:12.564 "zoned": false, 00:15:12.564 "supported_io_types": { 00:15:12.564 "read": true, 00:15:12.564 "write": true, 00:15:12.564 "unmap": false, 00:15:12.564 "flush": false, 00:15:12.564 "reset": true, 00:15:12.564 "nvme_admin": false, 00:15:12.564 "nvme_io": false, 00:15:12.564 "nvme_io_md": false, 00:15:12.564 "write_zeroes": true, 00:15:12.564 "zcopy": false, 00:15:12.564 "get_zone_info": false, 00:15:12.564 "zone_management": false, 00:15:12.564 "zone_append": false, 
00:15:12.564 "compare": false, 00:15:12.564 "compare_and_write": false, 00:15:12.564 "abort": false, 00:15:12.564 "seek_hole": false, 00:15:12.564 "seek_data": false, 00:15:12.564 "copy": false, 00:15:12.564 "nvme_iov_md": false 00:15:12.564 }, 00:15:12.564 "driver_specific": { 00:15:12.564 "raid": { 00:15:12.564 "uuid": "7cbf2a5a-714b-4e80-ab58-9ed39ab40f23", 00:15:12.564 "strip_size_kb": 64, 00:15:12.564 "state": "online", 00:15:12.564 "raid_level": "raid5f", 00:15:12.564 "superblock": false, 00:15:12.564 "num_base_bdevs": 3, 00:15:12.564 "num_base_bdevs_discovered": 3, 00:15:12.564 "num_base_bdevs_operational": 3, 00:15:12.564 "base_bdevs_list": [ 00:15:12.564 { 00:15:12.564 "name": "BaseBdev1", 00:15:12.564 "uuid": "05a5a570-68cd-4531-8fcc-bcab3c73e67f", 00:15:12.564 "is_configured": true, 00:15:12.564 "data_offset": 0, 00:15:12.564 "data_size": 65536 00:15:12.564 }, 00:15:12.564 { 00:15:12.564 "name": "BaseBdev2", 00:15:12.564 "uuid": "998ad8cc-5267-412c-a25f-b95ecb3c559f", 00:15:12.564 "is_configured": true, 00:15:12.564 "data_offset": 0, 00:15:12.564 "data_size": 65536 00:15:12.564 }, 00:15:12.564 { 00:15:12.564 "name": "BaseBdev3", 00:15:12.564 "uuid": "0227fbba-7863-4563-97e7-9e7a7d5461b8", 00:15:12.564 "is_configured": true, 00:15:12.564 "data_offset": 0, 00:15:12.564 "data_size": 65536 00:15:12.564 } 00:15:12.564 ] 00:15:12.564 } 00:15:12.564 } 00:15:12.564 }' 00:15:12.564 16:11:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:12.564 16:11:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:12.564 BaseBdev2 00:15:12.564 BaseBdev3' 00:15:12.564 16:11:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:12.564 16:11:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:15:12.564 16:11:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:12.564 16:11:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:12.564 16:11:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:12.564 16:11:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.564 16:11:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.824 16:11:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.824 16:11:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:12.824 16:11:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:12.824 16:11:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:12.824 16:11:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:12.824 16:11:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:12.824 16:11:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.824 16:11:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.824 16:11:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.824 16:11:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:12.824 16:11:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:12.824 16:11:38 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:12.824 16:11:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:12.824 16:11:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.824 16:11:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.824 16:11:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:12.824 16:11:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.824 16:11:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:12.824 16:11:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:12.824 16:11:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:12.824 16:11:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.824 16:11:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.824 [2024-12-12 16:11:39.039758] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:12.824 16:11:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.824 16:11:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:12.825 16:11:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:15:12.825 16:11:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:12.825 16:11:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:12.825 16:11:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:12.825 
16:11:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:15:12.825 16:11:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:12.825 16:11:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:12.825 16:11:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:12.825 16:11:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:12.825 16:11:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:12.825 16:11:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:12.825 16:11:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:12.825 16:11:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:12.825 16:11:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:12.825 16:11:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:12.825 16:11:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.825 16:11:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.825 16:11:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.083 16:11:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.083 16:11:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:13.083 "name": "Existed_Raid", 00:15:13.083 "uuid": "7cbf2a5a-714b-4e80-ab58-9ed39ab40f23", 00:15:13.083 "strip_size_kb": 64, 00:15:13.083 "state": 
"online", 00:15:13.084 "raid_level": "raid5f", 00:15:13.084 "superblock": false, 00:15:13.084 "num_base_bdevs": 3, 00:15:13.084 "num_base_bdevs_discovered": 2, 00:15:13.084 "num_base_bdevs_operational": 2, 00:15:13.084 "base_bdevs_list": [ 00:15:13.084 { 00:15:13.084 "name": null, 00:15:13.084 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.084 "is_configured": false, 00:15:13.084 "data_offset": 0, 00:15:13.084 "data_size": 65536 00:15:13.084 }, 00:15:13.084 { 00:15:13.084 "name": "BaseBdev2", 00:15:13.084 "uuid": "998ad8cc-5267-412c-a25f-b95ecb3c559f", 00:15:13.084 "is_configured": true, 00:15:13.084 "data_offset": 0, 00:15:13.084 "data_size": 65536 00:15:13.084 }, 00:15:13.084 { 00:15:13.084 "name": "BaseBdev3", 00:15:13.084 "uuid": "0227fbba-7863-4563-97e7-9e7a7d5461b8", 00:15:13.084 "is_configured": true, 00:15:13.084 "data_offset": 0, 00:15:13.084 "data_size": 65536 00:15:13.084 } 00:15:13.084 ] 00:15:13.084 }' 00:15:13.084 16:11:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:13.084 16:11:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.343 16:11:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:13.343 16:11:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:13.343 16:11:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.343 16:11:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:13.343 16:11:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.343 16:11:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.343 16:11:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.343 16:11:39 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:13.343 16:11:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:13.343 16:11:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:13.343 16:11:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.343 16:11:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.343 [2024-12-12 16:11:39.630308] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:13.343 [2024-12-12 16:11:39.630458] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:13.603 [2024-12-12 16:11:39.747410] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:13.603 16:11:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.603 16:11:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:13.603 16:11:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:13.603 16:11:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.603 16:11:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:13.603 16:11:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.603 16:11:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.603 16:11:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.603 16:11:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:13.603 16:11:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:15:13.603 16:11:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:13.603 16:11:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.603 16:11:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.603 [2024-12-12 16:11:39.807419] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:13.603 [2024-12-12 16:11:39.807619] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:13.603 16:11:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.603 16:11:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:13.603 16:11:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:13.603 16:11:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.603 16:11:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:13.603 16:11:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.603 16:11:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.603 16:11:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.863 16:11:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:13.863 16:11:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:13.863 16:11:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:15:13.863 16:11:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:13.863 16:11:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:15:13.863 16:11:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:13.863 16:11:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.863 16:11:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.863 BaseBdev2 00:15:13.863 16:11:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.863 16:11:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:13.863 16:11:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:13.863 16:11:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:13.863 16:11:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:13.863 16:11:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:13.863 16:11:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:13.863 16:11:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:13.863 16:11:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.863 16:11:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.863 16:11:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.863 16:11:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:13.863 16:11:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.863 16:11:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:15:13.863 [ 00:15:13.863 { 00:15:13.863 "name": "BaseBdev2", 00:15:13.863 "aliases": [ 00:15:13.863 "7fe0419f-d903-4d34-acdd-9b5b478ceffe" 00:15:13.863 ], 00:15:13.863 "product_name": "Malloc disk", 00:15:13.863 "block_size": 512, 00:15:13.863 "num_blocks": 65536, 00:15:13.863 "uuid": "7fe0419f-d903-4d34-acdd-9b5b478ceffe", 00:15:13.863 "assigned_rate_limits": { 00:15:13.863 "rw_ios_per_sec": 0, 00:15:13.863 "rw_mbytes_per_sec": 0, 00:15:13.863 "r_mbytes_per_sec": 0, 00:15:13.863 "w_mbytes_per_sec": 0 00:15:13.863 }, 00:15:13.863 "claimed": false, 00:15:13.863 "zoned": false, 00:15:13.863 "supported_io_types": { 00:15:13.863 "read": true, 00:15:13.863 "write": true, 00:15:13.863 "unmap": true, 00:15:13.863 "flush": true, 00:15:13.863 "reset": true, 00:15:13.863 "nvme_admin": false, 00:15:13.863 "nvme_io": false, 00:15:13.863 "nvme_io_md": false, 00:15:13.863 "write_zeroes": true, 00:15:13.863 "zcopy": true, 00:15:13.863 "get_zone_info": false, 00:15:13.863 "zone_management": false, 00:15:13.863 "zone_append": false, 00:15:13.863 "compare": false, 00:15:13.864 "compare_and_write": false, 00:15:13.864 "abort": true, 00:15:13.864 "seek_hole": false, 00:15:13.864 "seek_data": false, 00:15:13.864 "copy": true, 00:15:13.864 "nvme_iov_md": false 00:15:13.864 }, 00:15:13.864 "memory_domains": [ 00:15:13.864 { 00:15:13.864 "dma_device_id": "system", 00:15:13.864 "dma_device_type": 1 00:15:13.864 }, 00:15:13.864 { 00:15:13.864 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:13.864 "dma_device_type": 2 00:15:13.864 } 00:15:13.864 ], 00:15:13.864 "driver_specific": {} 00:15:13.864 } 00:15:13.864 ] 00:15:13.864 16:11:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.864 16:11:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:13.864 16:11:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:13.864 16:11:40 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:13.864 16:11:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:13.864 16:11:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.864 16:11:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.864 BaseBdev3 00:15:13.864 16:11:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.864 16:11:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:13.864 16:11:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:13.864 16:11:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:13.864 16:11:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:13.864 16:11:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:13.864 16:11:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:13.864 16:11:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:13.864 16:11:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.864 16:11:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.864 16:11:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.864 16:11:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:13.864 16:11:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.864 16:11:40 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:13.864 [ 00:15:13.864 { 00:15:13.864 "name": "BaseBdev3", 00:15:13.864 "aliases": [ 00:15:13.864 "0d56630e-3f9f-4c2f-bf94-0113e49a141a" 00:15:13.864 ], 00:15:13.864 "product_name": "Malloc disk", 00:15:13.864 "block_size": 512, 00:15:13.864 "num_blocks": 65536, 00:15:13.864 "uuid": "0d56630e-3f9f-4c2f-bf94-0113e49a141a", 00:15:13.864 "assigned_rate_limits": { 00:15:13.864 "rw_ios_per_sec": 0, 00:15:13.864 "rw_mbytes_per_sec": 0, 00:15:13.864 "r_mbytes_per_sec": 0, 00:15:13.864 "w_mbytes_per_sec": 0 00:15:13.864 }, 00:15:13.864 "claimed": false, 00:15:13.864 "zoned": false, 00:15:13.864 "supported_io_types": { 00:15:13.864 "read": true, 00:15:13.864 "write": true, 00:15:13.864 "unmap": true, 00:15:13.864 "flush": true, 00:15:13.864 "reset": true, 00:15:13.864 "nvme_admin": false, 00:15:13.864 "nvme_io": false, 00:15:13.864 "nvme_io_md": false, 00:15:13.864 "write_zeroes": true, 00:15:13.864 "zcopy": true, 00:15:13.864 "get_zone_info": false, 00:15:13.864 "zone_management": false, 00:15:13.864 "zone_append": false, 00:15:13.864 "compare": false, 00:15:13.864 "compare_and_write": false, 00:15:13.864 "abort": true, 00:15:13.864 "seek_hole": false, 00:15:13.864 "seek_data": false, 00:15:13.864 "copy": true, 00:15:13.864 "nvme_iov_md": false 00:15:13.864 }, 00:15:13.864 "memory_domains": [ 00:15:13.864 { 00:15:13.864 "dma_device_id": "system", 00:15:13.864 "dma_device_type": 1 00:15:13.864 }, 00:15:13.864 { 00:15:13.864 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:13.864 "dma_device_type": 2 00:15:13.864 } 00:15:13.864 ], 00:15:13.864 "driver_specific": {} 00:15:13.864 } 00:15:13.864 ] 00:15:13.864 16:11:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.864 16:11:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:13.864 16:11:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:13.864 16:11:40 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:13.864 16:11:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:13.864 16:11:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.864 16:11:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.864 [2024-12-12 16:11:40.166647] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:13.864 [2024-12-12 16:11:40.166804] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:13.864 [2024-12-12 16:11:40.166860] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:13.864 [2024-12-12 16:11:40.169159] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:13.864 16:11:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.864 16:11:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:13.864 16:11:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:13.864 16:11:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:13.864 16:11:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:13.864 16:11:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:13.864 16:11:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:13.864 16:11:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:13.864 16:11:40 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:13.864 16:11:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:13.864 16:11:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:13.864 16:11:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.864 16:11:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:13.864 16:11:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.864 16:11:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.864 16:11:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.124 16:11:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:14.124 "name": "Existed_Raid", 00:15:14.124 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.124 "strip_size_kb": 64, 00:15:14.124 "state": "configuring", 00:15:14.124 "raid_level": "raid5f", 00:15:14.124 "superblock": false, 00:15:14.124 "num_base_bdevs": 3, 00:15:14.124 "num_base_bdevs_discovered": 2, 00:15:14.124 "num_base_bdevs_operational": 3, 00:15:14.124 "base_bdevs_list": [ 00:15:14.124 { 00:15:14.124 "name": "BaseBdev1", 00:15:14.124 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.124 "is_configured": false, 00:15:14.124 "data_offset": 0, 00:15:14.124 "data_size": 0 00:15:14.124 }, 00:15:14.124 { 00:15:14.124 "name": "BaseBdev2", 00:15:14.124 "uuid": "7fe0419f-d903-4d34-acdd-9b5b478ceffe", 00:15:14.124 "is_configured": true, 00:15:14.124 "data_offset": 0, 00:15:14.124 "data_size": 65536 00:15:14.124 }, 00:15:14.124 { 00:15:14.124 "name": "BaseBdev3", 00:15:14.124 "uuid": "0d56630e-3f9f-4c2f-bf94-0113e49a141a", 00:15:14.124 "is_configured": true, 
00:15:14.124 "data_offset": 0, 00:15:14.124 "data_size": 65536 00:15:14.124 } 00:15:14.124 ] 00:15:14.124 }' 00:15:14.124 16:11:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:14.124 16:11:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.384 16:11:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:14.384 16:11:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.384 16:11:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.384 [2024-12-12 16:11:40.613988] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:14.384 16:11:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.384 16:11:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:14.384 16:11:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:14.384 16:11:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:14.384 16:11:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:14.384 16:11:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:14.384 16:11:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:14.384 16:11:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:14.384 16:11:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:14.384 16:11:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:14.384 16:11:40 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:14.384 16:11:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.384 16:11:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.384 16:11:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.384 16:11:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:14.384 16:11:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.384 16:11:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:14.384 "name": "Existed_Raid", 00:15:14.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.384 "strip_size_kb": 64, 00:15:14.384 "state": "configuring", 00:15:14.384 "raid_level": "raid5f", 00:15:14.384 "superblock": false, 00:15:14.384 "num_base_bdevs": 3, 00:15:14.384 "num_base_bdevs_discovered": 1, 00:15:14.384 "num_base_bdevs_operational": 3, 00:15:14.384 "base_bdevs_list": [ 00:15:14.384 { 00:15:14.384 "name": "BaseBdev1", 00:15:14.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.384 "is_configured": false, 00:15:14.384 "data_offset": 0, 00:15:14.384 "data_size": 0 00:15:14.384 }, 00:15:14.384 { 00:15:14.384 "name": null, 00:15:14.384 "uuid": "7fe0419f-d903-4d34-acdd-9b5b478ceffe", 00:15:14.384 "is_configured": false, 00:15:14.384 "data_offset": 0, 00:15:14.384 "data_size": 65536 00:15:14.384 }, 00:15:14.384 { 00:15:14.384 "name": "BaseBdev3", 00:15:14.384 "uuid": "0d56630e-3f9f-4c2f-bf94-0113e49a141a", 00:15:14.384 "is_configured": true, 00:15:14.384 "data_offset": 0, 00:15:14.384 "data_size": 65536 00:15:14.384 } 00:15:14.384 ] 00:15:14.384 }' 00:15:14.384 16:11:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:14.384 16:11:40 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.953 16:11:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:14.953 16:11:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.953 16:11:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.953 16:11:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.954 16:11:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.954 16:11:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:14.954 16:11:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:14.954 16:11:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.954 16:11:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.954 [2024-12-12 16:11:41.140126] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:14.954 BaseBdev1 00:15:14.954 16:11:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.954 16:11:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:14.954 16:11:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:14.954 16:11:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:14.954 16:11:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:14.954 16:11:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:14.954 16:11:41 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:14.954 16:11:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:14.954 16:11:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.954 16:11:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.954 16:11:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.954 16:11:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:14.954 16:11:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.954 16:11:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.954 [ 00:15:14.954 { 00:15:14.954 "name": "BaseBdev1", 00:15:14.954 "aliases": [ 00:15:14.954 "2aad843f-3381-4dee-ab56-56bc5d79de66" 00:15:14.954 ], 00:15:14.954 "product_name": "Malloc disk", 00:15:14.954 "block_size": 512, 00:15:14.954 "num_blocks": 65536, 00:15:14.954 "uuid": "2aad843f-3381-4dee-ab56-56bc5d79de66", 00:15:14.954 "assigned_rate_limits": { 00:15:14.954 "rw_ios_per_sec": 0, 00:15:14.954 "rw_mbytes_per_sec": 0, 00:15:14.954 "r_mbytes_per_sec": 0, 00:15:14.954 "w_mbytes_per_sec": 0 00:15:14.954 }, 00:15:14.954 "claimed": true, 00:15:14.954 "claim_type": "exclusive_write", 00:15:14.954 "zoned": false, 00:15:14.954 "supported_io_types": { 00:15:14.954 "read": true, 00:15:14.954 "write": true, 00:15:14.954 "unmap": true, 00:15:14.954 "flush": true, 00:15:14.954 "reset": true, 00:15:14.954 "nvme_admin": false, 00:15:14.954 "nvme_io": false, 00:15:14.954 "nvme_io_md": false, 00:15:14.954 "write_zeroes": true, 00:15:14.954 "zcopy": true, 00:15:14.954 "get_zone_info": false, 00:15:14.954 "zone_management": false, 00:15:14.954 "zone_append": false, 00:15:14.954 
"compare": false, 00:15:14.954 "compare_and_write": false, 00:15:14.954 "abort": true, 00:15:14.954 "seek_hole": false, 00:15:14.954 "seek_data": false, 00:15:14.954 "copy": true, 00:15:14.954 "nvme_iov_md": false 00:15:14.954 }, 00:15:14.954 "memory_domains": [ 00:15:14.954 { 00:15:14.954 "dma_device_id": "system", 00:15:14.954 "dma_device_type": 1 00:15:14.954 }, 00:15:14.954 { 00:15:14.954 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:14.954 "dma_device_type": 2 00:15:14.954 } 00:15:14.954 ], 00:15:14.954 "driver_specific": {} 00:15:14.954 } 00:15:14.954 ] 00:15:14.954 16:11:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.954 16:11:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:14.954 16:11:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:14.954 16:11:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:14.954 16:11:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:14.954 16:11:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:14.954 16:11:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:14.954 16:11:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:14.954 16:11:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:14.954 16:11:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:14.954 16:11:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:14.954 16:11:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:14.954 16:11:41 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:14.954 16:11:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.954 16:11:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.954 16:11:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.954 16:11:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.954 16:11:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:14.954 "name": "Existed_Raid", 00:15:14.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.954 "strip_size_kb": 64, 00:15:14.954 "state": "configuring", 00:15:14.954 "raid_level": "raid5f", 00:15:14.954 "superblock": false, 00:15:14.954 "num_base_bdevs": 3, 00:15:14.954 "num_base_bdevs_discovered": 2, 00:15:14.954 "num_base_bdevs_operational": 3, 00:15:14.954 "base_bdevs_list": [ 00:15:14.954 { 00:15:14.954 "name": "BaseBdev1", 00:15:14.954 "uuid": "2aad843f-3381-4dee-ab56-56bc5d79de66", 00:15:14.954 "is_configured": true, 00:15:14.954 "data_offset": 0, 00:15:14.954 "data_size": 65536 00:15:14.954 }, 00:15:14.954 { 00:15:14.954 "name": null, 00:15:14.954 "uuid": "7fe0419f-d903-4d34-acdd-9b5b478ceffe", 00:15:14.954 "is_configured": false, 00:15:14.954 "data_offset": 0, 00:15:14.954 "data_size": 65536 00:15:14.954 }, 00:15:14.954 { 00:15:14.954 "name": "BaseBdev3", 00:15:14.954 "uuid": "0d56630e-3f9f-4c2f-bf94-0113e49a141a", 00:15:14.954 "is_configured": true, 00:15:14.954 "data_offset": 0, 00:15:14.954 "data_size": 65536 00:15:14.954 } 00:15:14.954 ] 00:15:14.954 }' 00:15:14.954 16:11:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:14.954 16:11:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.523 16:11:41 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.523 16:11:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:15.523 16:11:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.523 16:11:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.523 16:11:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.523 16:11:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:15.523 16:11:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:15.523 16:11:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.523 16:11:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.523 [2024-12-12 16:11:41.703481] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:15.523 16:11:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.523 16:11:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:15.523 16:11:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:15.523 16:11:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:15.523 16:11:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:15.523 16:11:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:15.523 16:11:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:15.523 16:11:41 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:15.523 16:11:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:15.523 16:11:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:15.523 16:11:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:15.523 16:11:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:15.523 16:11:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.523 16:11:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.523 16:11:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.523 16:11:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.523 16:11:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:15.523 "name": "Existed_Raid", 00:15:15.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.523 "strip_size_kb": 64, 00:15:15.523 "state": "configuring", 00:15:15.523 "raid_level": "raid5f", 00:15:15.523 "superblock": false, 00:15:15.523 "num_base_bdevs": 3, 00:15:15.523 "num_base_bdevs_discovered": 1, 00:15:15.523 "num_base_bdevs_operational": 3, 00:15:15.523 "base_bdevs_list": [ 00:15:15.523 { 00:15:15.523 "name": "BaseBdev1", 00:15:15.524 "uuid": "2aad843f-3381-4dee-ab56-56bc5d79de66", 00:15:15.524 "is_configured": true, 00:15:15.524 "data_offset": 0, 00:15:15.524 "data_size": 65536 00:15:15.524 }, 00:15:15.524 { 00:15:15.524 "name": null, 00:15:15.524 "uuid": "7fe0419f-d903-4d34-acdd-9b5b478ceffe", 00:15:15.524 "is_configured": false, 00:15:15.524 "data_offset": 0, 00:15:15.524 "data_size": 65536 00:15:15.524 }, 00:15:15.524 { 00:15:15.524 "name": null, 
00:15:15.524 "uuid": "0d56630e-3f9f-4c2f-bf94-0113e49a141a", 00:15:15.524 "is_configured": false, 00:15:15.524 "data_offset": 0, 00:15:15.524 "data_size": 65536 00:15:15.524 } 00:15:15.524 ] 00:15:15.524 }' 00:15:15.524 16:11:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:15.524 16:11:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.783 16:11:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.783 16:11:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.783 16:11:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.783 16:11:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:15.783 16:11:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.783 16:11:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:15.783 16:11:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:15.783 16:11:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.783 16:11:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.783 [2024-12-12 16:11:42.126867] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:15.783 16:11:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.783 16:11:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:15.783 16:11:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:15.783 16:11:42 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:15.783 16:11:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:15.783 16:11:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:15.783 16:11:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:15.783 16:11:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:16.043 16:11:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:16.043 16:11:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:16.043 16:11:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:16.043 16:11:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:16.043 16:11:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.043 16:11:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.043 16:11:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.043 16:11:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.043 16:11:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:16.043 "name": "Existed_Raid", 00:15:16.043 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:16.043 "strip_size_kb": 64, 00:15:16.043 "state": "configuring", 00:15:16.043 "raid_level": "raid5f", 00:15:16.043 "superblock": false, 00:15:16.043 "num_base_bdevs": 3, 00:15:16.043 "num_base_bdevs_discovered": 2, 00:15:16.043 "num_base_bdevs_operational": 3, 00:15:16.043 "base_bdevs_list": [ 00:15:16.043 { 
00:15:16.043 "name": "BaseBdev1", 00:15:16.043 "uuid": "2aad843f-3381-4dee-ab56-56bc5d79de66", 00:15:16.043 "is_configured": true, 00:15:16.043 "data_offset": 0, 00:15:16.043 "data_size": 65536 00:15:16.043 }, 00:15:16.043 { 00:15:16.043 "name": null, 00:15:16.043 "uuid": "7fe0419f-d903-4d34-acdd-9b5b478ceffe", 00:15:16.043 "is_configured": false, 00:15:16.043 "data_offset": 0, 00:15:16.043 "data_size": 65536 00:15:16.043 }, 00:15:16.043 { 00:15:16.043 "name": "BaseBdev3", 00:15:16.043 "uuid": "0d56630e-3f9f-4c2f-bf94-0113e49a141a", 00:15:16.043 "is_configured": true, 00:15:16.043 "data_offset": 0, 00:15:16.043 "data_size": 65536 00:15:16.043 } 00:15:16.043 ] 00:15:16.043 }' 00:15:16.043 16:11:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:16.043 16:11:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.303 16:11:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.303 16:11:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:16.303 16:11:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.303 16:11:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.303 16:11:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.303 16:11:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:16.303 16:11:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:16.303 16:11:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.303 16:11:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.303 [2024-12-12 16:11:42.590091] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:16.563 16:11:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.563 16:11:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:16.563 16:11:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:16.563 16:11:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:16.563 16:11:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:16.563 16:11:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:16.563 16:11:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:16.563 16:11:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:16.563 16:11:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:16.563 16:11:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:16.563 16:11:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:16.563 16:11:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.563 16:11:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:16.563 16:11:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.563 16:11:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.563 16:11:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.563 16:11:42 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:16.563 "name": "Existed_Raid", 00:15:16.563 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:16.563 "strip_size_kb": 64, 00:15:16.563 "state": "configuring", 00:15:16.563 "raid_level": "raid5f", 00:15:16.563 "superblock": false, 00:15:16.563 "num_base_bdevs": 3, 00:15:16.563 "num_base_bdevs_discovered": 1, 00:15:16.563 "num_base_bdevs_operational": 3, 00:15:16.563 "base_bdevs_list": [ 00:15:16.563 { 00:15:16.563 "name": null, 00:15:16.563 "uuid": "2aad843f-3381-4dee-ab56-56bc5d79de66", 00:15:16.563 "is_configured": false, 00:15:16.563 "data_offset": 0, 00:15:16.563 "data_size": 65536 00:15:16.563 }, 00:15:16.563 { 00:15:16.563 "name": null, 00:15:16.563 "uuid": "7fe0419f-d903-4d34-acdd-9b5b478ceffe", 00:15:16.563 "is_configured": false, 00:15:16.564 "data_offset": 0, 00:15:16.564 "data_size": 65536 00:15:16.564 }, 00:15:16.564 { 00:15:16.564 "name": "BaseBdev3", 00:15:16.564 "uuid": "0d56630e-3f9f-4c2f-bf94-0113e49a141a", 00:15:16.564 "is_configured": true, 00:15:16.564 "data_offset": 0, 00:15:16.564 "data_size": 65536 00:15:16.564 } 00:15:16.564 ] 00:15:16.564 }' 00:15:16.564 16:11:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:16.564 16:11:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.824 16:11:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.824 16:11:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:16.824 16:11:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.824 16:11:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.824 16:11:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.824 16:11:43 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:16.824 16:11:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:16.824 16:11:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.824 16:11:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.824 [2024-12-12 16:11:43.140257] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:16.824 16:11:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.824 16:11:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:16.824 16:11:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:16.824 16:11:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:16.824 16:11:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:16.824 16:11:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:16.824 16:11:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:16.824 16:11:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:16.824 16:11:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:16.824 16:11:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:16.824 16:11:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:16.824 16:11:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.824 16:11:43 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:16.824 16:11:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.824 16:11:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.824 16:11:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.085 16:11:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:17.085 "name": "Existed_Raid", 00:15:17.085 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:17.085 "strip_size_kb": 64, 00:15:17.085 "state": "configuring", 00:15:17.085 "raid_level": "raid5f", 00:15:17.085 "superblock": false, 00:15:17.085 "num_base_bdevs": 3, 00:15:17.085 "num_base_bdevs_discovered": 2, 00:15:17.085 "num_base_bdevs_operational": 3, 00:15:17.085 "base_bdevs_list": [ 00:15:17.085 { 00:15:17.085 "name": null, 00:15:17.085 "uuid": "2aad843f-3381-4dee-ab56-56bc5d79de66", 00:15:17.085 "is_configured": false, 00:15:17.085 "data_offset": 0, 00:15:17.085 "data_size": 65536 00:15:17.085 }, 00:15:17.085 { 00:15:17.085 "name": "BaseBdev2", 00:15:17.085 "uuid": "7fe0419f-d903-4d34-acdd-9b5b478ceffe", 00:15:17.085 "is_configured": true, 00:15:17.085 "data_offset": 0, 00:15:17.085 "data_size": 65536 00:15:17.085 }, 00:15:17.085 { 00:15:17.085 "name": "BaseBdev3", 00:15:17.085 "uuid": "0d56630e-3f9f-4c2f-bf94-0113e49a141a", 00:15:17.085 "is_configured": true, 00:15:17.085 "data_offset": 0, 00:15:17.085 "data_size": 65536 00:15:17.085 } 00:15:17.085 ] 00:15:17.085 }' 00:15:17.085 16:11:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:17.085 16:11:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.345 16:11:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:17.345 
16:11:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.345 16:11:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.345 16:11:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.345 16:11:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.345 16:11:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:17.345 16:11:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.345 16:11:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:17.345 16:11:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.345 16:11:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.345 16:11:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.345 16:11:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 2aad843f-3381-4dee-ab56-56bc5d79de66 00:15:17.345 16:11:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.345 16:11:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.345 [2024-12-12 16:11:43.686391] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:17.345 [2024-12-12 16:11:43.686482] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:17.345 [2024-12-12 16:11:43.686494] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:17.345 [2024-12-12 16:11:43.686790] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000006220 00:15:17.345 [2024-12-12 16:11:43.692383] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:17.345 [2024-12-12 16:11:43.692427] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:17.345 [2024-12-12 16:11:43.692856] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:17.345 NewBaseBdev 00:15:17.345 16:11:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.345 16:11:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:17.345 16:11:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:15:17.345 16:11:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:17.605 16:11:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:17.605 16:11:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:17.605 16:11:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:17.605 16:11:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:17.605 16:11:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.605 16:11:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.605 16:11:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.605 16:11:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:17.605 16:11:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.605 16:11:43 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.605 [ 00:15:17.605 { 00:15:17.605 "name": "NewBaseBdev", 00:15:17.605 "aliases": [ 00:15:17.605 "2aad843f-3381-4dee-ab56-56bc5d79de66" 00:15:17.605 ], 00:15:17.605 "product_name": "Malloc disk", 00:15:17.605 "block_size": 512, 00:15:17.605 "num_blocks": 65536, 00:15:17.605 "uuid": "2aad843f-3381-4dee-ab56-56bc5d79de66", 00:15:17.605 "assigned_rate_limits": { 00:15:17.605 "rw_ios_per_sec": 0, 00:15:17.605 "rw_mbytes_per_sec": 0, 00:15:17.605 "r_mbytes_per_sec": 0, 00:15:17.605 "w_mbytes_per_sec": 0 00:15:17.605 }, 00:15:17.605 "claimed": true, 00:15:17.605 "claim_type": "exclusive_write", 00:15:17.605 "zoned": false, 00:15:17.605 "supported_io_types": { 00:15:17.605 "read": true, 00:15:17.605 "write": true, 00:15:17.605 "unmap": true, 00:15:17.605 "flush": true, 00:15:17.605 "reset": true, 00:15:17.605 "nvme_admin": false, 00:15:17.605 "nvme_io": false, 00:15:17.605 "nvme_io_md": false, 00:15:17.605 "write_zeroes": true, 00:15:17.605 "zcopy": true, 00:15:17.605 "get_zone_info": false, 00:15:17.605 "zone_management": false, 00:15:17.605 "zone_append": false, 00:15:17.605 "compare": false, 00:15:17.605 "compare_and_write": false, 00:15:17.605 "abort": true, 00:15:17.605 "seek_hole": false, 00:15:17.605 "seek_data": false, 00:15:17.605 "copy": true, 00:15:17.605 "nvme_iov_md": false 00:15:17.605 }, 00:15:17.605 "memory_domains": [ 00:15:17.605 { 00:15:17.605 "dma_device_id": "system", 00:15:17.605 "dma_device_type": 1 00:15:17.605 }, 00:15:17.605 { 00:15:17.605 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:17.605 "dma_device_type": 2 00:15:17.605 } 00:15:17.605 ], 00:15:17.605 "driver_specific": {} 00:15:17.605 } 00:15:17.605 ] 00:15:17.605 16:11:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.605 16:11:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:17.605 16:11:43 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:17.605 16:11:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:17.605 16:11:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:17.605 16:11:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:17.605 16:11:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:17.605 16:11:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:17.605 16:11:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:17.605 16:11:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:17.605 16:11:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:17.605 16:11:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:17.605 16:11:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.605 16:11:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.605 16:11:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.605 16:11:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:17.605 16:11:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.605 16:11:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:17.605 "name": "Existed_Raid", 00:15:17.605 "uuid": "2719e339-f6bc-4695-8862-f8865d64d52f", 00:15:17.605 "strip_size_kb": 64, 00:15:17.605 "state": "online", 
00:15:17.605 "raid_level": "raid5f", 00:15:17.605 "superblock": false, 00:15:17.605 "num_base_bdevs": 3, 00:15:17.605 "num_base_bdevs_discovered": 3, 00:15:17.605 "num_base_bdevs_operational": 3, 00:15:17.606 "base_bdevs_list": [ 00:15:17.606 { 00:15:17.606 "name": "NewBaseBdev", 00:15:17.606 "uuid": "2aad843f-3381-4dee-ab56-56bc5d79de66", 00:15:17.606 "is_configured": true, 00:15:17.606 "data_offset": 0, 00:15:17.606 "data_size": 65536 00:15:17.606 }, 00:15:17.606 { 00:15:17.606 "name": "BaseBdev2", 00:15:17.606 "uuid": "7fe0419f-d903-4d34-acdd-9b5b478ceffe", 00:15:17.606 "is_configured": true, 00:15:17.606 "data_offset": 0, 00:15:17.606 "data_size": 65536 00:15:17.606 }, 00:15:17.606 { 00:15:17.606 "name": "BaseBdev3", 00:15:17.606 "uuid": "0d56630e-3f9f-4c2f-bf94-0113e49a141a", 00:15:17.606 "is_configured": true, 00:15:17.606 "data_offset": 0, 00:15:17.606 "data_size": 65536 00:15:17.606 } 00:15:17.606 ] 00:15:17.606 }' 00:15:17.606 16:11:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:17.606 16:11:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.866 16:11:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:17.866 16:11:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:17.866 16:11:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:17.866 16:11:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:17.866 16:11:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:17.866 16:11:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:17.866 16:11:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:17.866 16:11:44 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.866 16:11:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.866 16:11:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:17.866 [2024-12-12 16:11:44.167841] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:17.866 16:11:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.866 16:11:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:17.866 "name": "Existed_Raid", 00:15:17.866 "aliases": [ 00:15:17.866 "2719e339-f6bc-4695-8862-f8865d64d52f" 00:15:17.866 ], 00:15:17.866 "product_name": "Raid Volume", 00:15:17.866 "block_size": 512, 00:15:17.866 "num_blocks": 131072, 00:15:17.866 "uuid": "2719e339-f6bc-4695-8862-f8865d64d52f", 00:15:17.866 "assigned_rate_limits": { 00:15:17.866 "rw_ios_per_sec": 0, 00:15:17.866 "rw_mbytes_per_sec": 0, 00:15:17.866 "r_mbytes_per_sec": 0, 00:15:17.866 "w_mbytes_per_sec": 0 00:15:17.866 }, 00:15:17.866 "claimed": false, 00:15:17.866 "zoned": false, 00:15:17.866 "supported_io_types": { 00:15:17.866 "read": true, 00:15:17.866 "write": true, 00:15:17.866 "unmap": false, 00:15:17.866 "flush": false, 00:15:17.866 "reset": true, 00:15:17.866 "nvme_admin": false, 00:15:17.866 "nvme_io": false, 00:15:17.866 "nvme_io_md": false, 00:15:17.866 "write_zeroes": true, 00:15:17.866 "zcopy": false, 00:15:17.866 "get_zone_info": false, 00:15:17.866 "zone_management": false, 00:15:17.866 "zone_append": false, 00:15:17.866 "compare": false, 00:15:17.866 "compare_and_write": false, 00:15:17.866 "abort": false, 00:15:17.866 "seek_hole": false, 00:15:17.866 "seek_data": false, 00:15:17.866 "copy": false, 00:15:17.866 "nvme_iov_md": false 00:15:17.866 }, 00:15:17.866 "driver_specific": { 00:15:17.866 "raid": { 00:15:17.866 "uuid": 
"2719e339-f6bc-4695-8862-f8865d64d52f", 00:15:17.866 "strip_size_kb": 64, 00:15:17.866 "state": "online", 00:15:17.866 "raid_level": "raid5f", 00:15:17.866 "superblock": false, 00:15:17.866 "num_base_bdevs": 3, 00:15:17.866 "num_base_bdevs_discovered": 3, 00:15:17.866 "num_base_bdevs_operational": 3, 00:15:17.866 "base_bdevs_list": [ 00:15:17.866 { 00:15:17.866 "name": "NewBaseBdev", 00:15:17.866 "uuid": "2aad843f-3381-4dee-ab56-56bc5d79de66", 00:15:17.866 "is_configured": true, 00:15:17.866 "data_offset": 0, 00:15:17.866 "data_size": 65536 00:15:17.866 }, 00:15:17.866 { 00:15:17.866 "name": "BaseBdev2", 00:15:17.866 "uuid": "7fe0419f-d903-4d34-acdd-9b5b478ceffe", 00:15:17.866 "is_configured": true, 00:15:17.866 "data_offset": 0, 00:15:17.866 "data_size": 65536 00:15:17.866 }, 00:15:17.866 { 00:15:17.866 "name": "BaseBdev3", 00:15:17.866 "uuid": "0d56630e-3f9f-4c2f-bf94-0113e49a141a", 00:15:17.866 "is_configured": true, 00:15:17.866 "data_offset": 0, 00:15:17.866 "data_size": 65536 00:15:17.866 } 00:15:17.866 ] 00:15:17.866 } 00:15:17.866 } 00:15:17.866 }' 00:15:17.866 16:11:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:18.126 16:11:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:18.126 BaseBdev2 00:15:18.126 BaseBdev3' 00:15:18.126 16:11:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:18.126 16:11:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:18.126 16:11:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:18.126 16:11:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:18.126 16:11:44 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:18.126 16:11:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.126 16:11:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.126 16:11:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.126 16:11:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:18.126 16:11:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:18.126 16:11:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:18.126 16:11:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:18.126 16:11:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.126 16:11:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.127 16:11:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:18.127 16:11:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.127 16:11:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:18.127 16:11:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:18.127 16:11:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:18.127 16:11:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:18.127 16:11:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.127 16:11:44 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.127 16:11:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:18.127 16:11:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.127 16:11:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:18.127 16:11:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:18.127 16:11:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:18.127 16:11:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.127 16:11:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.127 [2024-12-12 16:11:44.415149] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:18.127 [2024-12-12 16:11:44.415203] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:18.127 [2024-12-12 16:11:44.415312] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:18.127 [2024-12-12 16:11:44.415657] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:18.127 [2024-12-12 16:11:44.415677] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:15:18.127 16:11:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.127 16:11:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 81969 00:15:18.127 16:11:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 81969 ']' 00:15:18.127 16:11:44 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 81969 00:15:18.127 16:11:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:15:18.127 16:11:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:18.127 16:11:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81969 00:15:18.127 16:11:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:18.127 16:11:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:18.127 16:11:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81969' 00:15:18.127 killing process with pid 81969 00:15:18.127 16:11:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 81969 00:15:18.127 [2024-12-12 16:11:44.462451] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:18.127 16:11:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 81969 00:15:18.698 [2024-12-12 16:11:44.845703] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:20.079 16:11:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:15:20.079 00:15:20.079 real 0m10.899s 00:15:20.079 user 0m16.900s 00:15:20.079 sys 0m1.945s 00:15:20.079 16:11:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:20.079 16:11:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.079 ************************************ 00:15:20.079 END TEST raid5f_state_function_test 00:15:20.079 ************************************ 00:15:20.079 16:11:46 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:15:20.079 16:11:46 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:20.079 16:11:46 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:20.079 16:11:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:20.079 ************************************ 00:15:20.079 START TEST raid5f_state_function_test_sb 00:15:20.079 ************************************ 00:15:20.079 16:11:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 true 00:15:20.079 16:11:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:15:20.079 16:11:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:15:20.079 16:11:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:15:20.079 16:11:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:20.079 16:11:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:20.079 16:11:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:20.079 16:11:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:20.079 16:11:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:20.079 16:11:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:20.079 16:11:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:20.079 16:11:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:20.079 16:11:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:20.079 16:11:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:20.079 16:11:46 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:20.079 16:11:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:20.079 16:11:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:20.079 16:11:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:20.079 16:11:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:20.079 16:11:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:20.079 16:11:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:20.079 16:11:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:20.079 16:11:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:15:20.079 16:11:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:15:20.079 16:11:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:15:20.079 16:11:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:15:20.079 16:11:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:15:20.079 16:11:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=82590 00:15:20.079 16:11:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:20.079 16:11:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 82590' 00:15:20.080 Process raid pid: 82590 00:15:20.080 16:11:46 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@231 -- # waitforlisten 82590 00:15:20.080 16:11:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 82590 ']' 00:15:20.080 16:11:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:20.080 16:11:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:20.080 16:11:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:20.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:20.080 16:11:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:20.080 16:11:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.080 [2024-12-12 16:11:46.385410] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
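Throughout this trace, `verify_raid_bdev_state` checks the RAID bdev by running `rpc_cmd bdev_raid_get_bdevs all`, filtering the JSON with `jq -r '.[] | select(.name == "Existed_Raid")'`, and comparing fields such as `state` and `num_base_bdevs_operational` against expected values. The check can be sketched in Python; the sample JSON below is an illustrative reconstruction of the dumps seen above, and the helper is a sketch of the test's logic, not SPDK code:

```python
import json

# Sample bdev_raid_get_bdevs output, reconstructed from the raid_bdev_info
# dumps in the trace above (illustrative only; fields mirror the log).
SAMPLE = json.loads("""
[
  {
    "name": "Existed_Raid",
    "strip_size_kb": 64,
    "state": "configuring",
    "raid_level": "raid5f",
    "num_base_bdevs": 3,
    "num_base_bdevs_discovered": 2,
    "num_base_bdevs_operational": 3
  }
]
""")

def verify_raid_bdev_state(bdevs, name, expected_state, strip_size, operational):
    """Mimic the jq select(...) filter plus the field comparisons the test performs."""
    # jq: .[] | select(.name == "Existed_Raid")
    info = next(b for b in bdevs if b["name"] == name)
    return (info["state"] == expected_state
            and info["strip_size_kb"] == strip_size
            and info["num_base_bdevs_operational"] == operational)

print(verify_raid_bdev_state(SAMPLE, "Existed_Raid", "configuring", 64, 3))
```

This matches the pattern in the log where the state moves from `configuring` (discovered < operational) to `online` once all three base bdevs are claimed.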
00:15:20.080 [2024-12-12 16:11:46.385628] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:20.340 [2024-12-12 16:11:46.553351] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:20.599 [2024-12-12 16:11:46.707784] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:20.857 [2024-12-12 16:11:46.975384] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:20.857 [2024-12-12 16:11:46.975582] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:21.117 16:11:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:21.117 16:11:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:15:21.117 16:11:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:21.117 16:11:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.117 16:11:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.117 [2024-12-12 16:11:47.216833] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:21.117 [2024-12-12 16:11:47.217027] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:21.117 [2024-12-12 16:11:47.217085] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:21.117 [2024-12-12 16:11:47.217126] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:21.117 [2024-12-12 16:11:47.217163] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:15:21.117 [2024-12-12 16:11:47.217195] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:21.118 16:11:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.118 16:11:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:21.118 16:11:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:21.118 16:11:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:21.118 16:11:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:21.118 16:11:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:21.118 16:11:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:21.118 16:11:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:21.118 16:11:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:21.118 16:11:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:21.118 16:11:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:21.118 16:11:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.118 16:11:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:21.118 16:11:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.118 16:11:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.118 16:11:47 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.118 16:11:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:21.118 "name": "Existed_Raid", 00:15:21.118 "uuid": "5d21c3e9-18f5-4491-96c3-8ba88ead82c5", 00:15:21.118 "strip_size_kb": 64, 00:15:21.118 "state": "configuring", 00:15:21.118 "raid_level": "raid5f", 00:15:21.118 "superblock": true, 00:15:21.118 "num_base_bdevs": 3, 00:15:21.118 "num_base_bdevs_discovered": 0, 00:15:21.118 "num_base_bdevs_operational": 3, 00:15:21.118 "base_bdevs_list": [ 00:15:21.118 { 00:15:21.118 "name": "BaseBdev1", 00:15:21.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.118 "is_configured": false, 00:15:21.118 "data_offset": 0, 00:15:21.118 "data_size": 0 00:15:21.118 }, 00:15:21.118 { 00:15:21.118 "name": "BaseBdev2", 00:15:21.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.118 "is_configured": false, 00:15:21.118 "data_offset": 0, 00:15:21.118 "data_size": 0 00:15:21.118 }, 00:15:21.118 { 00:15:21.118 "name": "BaseBdev3", 00:15:21.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.118 "is_configured": false, 00:15:21.118 "data_offset": 0, 00:15:21.118 "data_size": 0 00:15:21.118 } 00:15:21.118 ] 00:15:21.118 }' 00:15:21.118 16:11:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:21.118 16:11:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.377 16:11:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:21.377 16:11:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.377 16:11:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.377 [2024-12-12 16:11:47.652135] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:21.377 
[2024-12-12 16:11:47.652300] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:21.377 16:11:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.377 16:11:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:21.377 16:11:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.377 16:11:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.377 [2024-12-12 16:11:47.660092] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:21.377 [2024-12-12 16:11:47.660205] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:21.377 [2024-12-12 16:11:47.660241] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:21.377 [2024-12-12 16:11:47.660272] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:21.377 [2024-12-12 16:11:47.660296] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:21.377 [2024-12-12 16:11:47.660327] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:21.377 16:11:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.377 16:11:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:21.377 16:11:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.377 16:11:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.377 [2024-12-12 16:11:47.712500] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:21.377 BaseBdev1 00:15:21.377 16:11:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.377 16:11:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:21.377 16:11:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:21.377 16:11:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:21.377 16:11:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:21.377 16:11:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:21.377 16:11:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:21.377 16:11:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:21.377 16:11:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.377 16:11:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.377 16:11:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.377 16:11:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:21.377 16:11:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.377 16:11:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.636 [ 00:15:21.636 { 00:15:21.636 "name": "BaseBdev1", 00:15:21.636 "aliases": [ 00:15:21.636 "7b0c122a-31a4-4a5f-9467-dcbde0b8805c" 00:15:21.636 ], 00:15:21.636 "product_name": "Malloc disk", 00:15:21.636 "block_size": 512, 00:15:21.636 
"num_blocks": 65536, 00:15:21.636 "uuid": "7b0c122a-31a4-4a5f-9467-dcbde0b8805c", 00:15:21.636 "assigned_rate_limits": { 00:15:21.636 "rw_ios_per_sec": 0, 00:15:21.636 "rw_mbytes_per_sec": 0, 00:15:21.636 "r_mbytes_per_sec": 0, 00:15:21.636 "w_mbytes_per_sec": 0 00:15:21.636 }, 00:15:21.636 "claimed": true, 00:15:21.636 "claim_type": "exclusive_write", 00:15:21.636 "zoned": false, 00:15:21.636 "supported_io_types": { 00:15:21.636 "read": true, 00:15:21.636 "write": true, 00:15:21.636 "unmap": true, 00:15:21.636 "flush": true, 00:15:21.636 "reset": true, 00:15:21.636 "nvme_admin": false, 00:15:21.636 "nvme_io": false, 00:15:21.636 "nvme_io_md": false, 00:15:21.636 "write_zeroes": true, 00:15:21.636 "zcopy": true, 00:15:21.636 "get_zone_info": false, 00:15:21.636 "zone_management": false, 00:15:21.636 "zone_append": false, 00:15:21.636 "compare": false, 00:15:21.636 "compare_and_write": false, 00:15:21.636 "abort": true, 00:15:21.637 "seek_hole": false, 00:15:21.637 "seek_data": false, 00:15:21.637 "copy": true, 00:15:21.637 "nvme_iov_md": false 00:15:21.637 }, 00:15:21.637 "memory_domains": [ 00:15:21.637 { 00:15:21.637 "dma_device_id": "system", 00:15:21.637 "dma_device_type": 1 00:15:21.637 }, 00:15:21.637 { 00:15:21.637 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:21.637 "dma_device_type": 2 00:15:21.637 } 00:15:21.637 ], 00:15:21.637 "driver_specific": {} 00:15:21.637 } 00:15:21.637 ] 00:15:21.637 16:11:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.637 16:11:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:21.637 16:11:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:21.637 16:11:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:21.637 16:11:47 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:21.637 16:11:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:21.637 16:11:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:21.637 16:11:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:21.637 16:11:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:21.637 16:11:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:21.637 16:11:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:21.637 16:11:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:21.637 16:11:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.637 16:11:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:21.637 16:11:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.637 16:11:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.637 16:11:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.637 16:11:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:21.637 "name": "Existed_Raid", 00:15:21.637 "uuid": "d6b3569b-3f17-4db7-84e0-beefa102aaf1", 00:15:21.637 "strip_size_kb": 64, 00:15:21.637 "state": "configuring", 00:15:21.637 "raid_level": "raid5f", 00:15:21.637 "superblock": true, 00:15:21.637 "num_base_bdevs": 3, 00:15:21.637 "num_base_bdevs_discovered": 1, 00:15:21.637 "num_base_bdevs_operational": 3, 00:15:21.637 "base_bdevs_list": [ 00:15:21.637 { 00:15:21.637 
"name": "BaseBdev1", 00:15:21.637 "uuid": "7b0c122a-31a4-4a5f-9467-dcbde0b8805c", 00:15:21.637 "is_configured": true, 00:15:21.637 "data_offset": 2048, 00:15:21.637 "data_size": 63488 00:15:21.637 }, 00:15:21.637 { 00:15:21.637 "name": "BaseBdev2", 00:15:21.637 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.637 "is_configured": false, 00:15:21.637 "data_offset": 0, 00:15:21.637 "data_size": 0 00:15:21.637 }, 00:15:21.637 { 00:15:21.637 "name": "BaseBdev3", 00:15:21.637 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.637 "is_configured": false, 00:15:21.637 "data_offset": 0, 00:15:21.637 "data_size": 0 00:15:21.637 } 00:15:21.637 ] 00:15:21.637 }' 00:15:21.637 16:11:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:21.637 16:11:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.897 16:11:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:21.897 16:11:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.897 16:11:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.897 [2024-12-12 16:11:48.140019] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:21.898 [2024-12-12 16:11:48.140124] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:21.898 16:11:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.898 16:11:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:21.898 16:11:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.898 16:11:48 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:15:21.898 [2024-12-12 16:11:48.152013] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:21.898 [2024-12-12 16:11:48.154535] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:21.898 [2024-12-12 16:11:48.154641] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:21.898 [2024-12-12 16:11:48.154681] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:21.898 [2024-12-12 16:11:48.154714] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:21.898 16:11:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.898 16:11:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:21.898 16:11:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:21.898 16:11:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:21.898 16:11:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:21.898 16:11:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:21.898 16:11:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:21.898 16:11:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:21.898 16:11:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:21.898 16:11:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:21.898 16:11:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:15:21.898 16:11:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:21.898 16:11:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:21.898 16:11:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.898 16:11:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:21.898 16:11:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.898 16:11:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.898 16:11:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.898 16:11:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:21.898 "name": "Existed_Raid", 00:15:21.898 "uuid": "49fce4e8-f8b7-4ca0-9ef4-54baefde3452", 00:15:21.898 "strip_size_kb": 64, 00:15:21.898 "state": "configuring", 00:15:21.898 "raid_level": "raid5f", 00:15:21.898 "superblock": true, 00:15:21.898 "num_base_bdevs": 3, 00:15:21.898 "num_base_bdevs_discovered": 1, 00:15:21.898 "num_base_bdevs_operational": 3, 00:15:21.898 "base_bdevs_list": [ 00:15:21.898 { 00:15:21.898 "name": "BaseBdev1", 00:15:21.898 "uuid": "7b0c122a-31a4-4a5f-9467-dcbde0b8805c", 00:15:21.898 "is_configured": true, 00:15:21.898 "data_offset": 2048, 00:15:21.898 "data_size": 63488 00:15:21.898 }, 00:15:21.898 { 00:15:21.898 "name": "BaseBdev2", 00:15:21.898 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.898 "is_configured": false, 00:15:21.898 "data_offset": 0, 00:15:21.898 "data_size": 0 00:15:21.898 }, 00:15:21.898 { 00:15:21.898 "name": "BaseBdev3", 00:15:21.898 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.898 "is_configured": false, 00:15:21.898 "data_offset": 0, 00:15:21.898 "data_size": 
0 00:15:21.898 } 00:15:21.898 ] 00:15:21.898 }' 00:15:21.898 16:11:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:21.898 16:11:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.467 16:11:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:22.467 16:11:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.467 16:11:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.467 [2024-12-12 16:11:48.615099] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:22.467 BaseBdev2 00:15:22.467 16:11:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.467 16:11:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:22.467 16:11:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:22.467 16:11:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:22.467 16:11:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:22.467 16:11:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:22.467 16:11:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:22.467 16:11:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:22.467 16:11:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.467 16:11:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.467 16:11:48 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.467 16:11:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:22.467 16:11:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.467 16:11:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.467 [ 00:15:22.467 { 00:15:22.467 "name": "BaseBdev2", 00:15:22.467 "aliases": [ 00:15:22.467 "06c01417-853c-4b96-9881-053e148d15ad" 00:15:22.467 ], 00:15:22.467 "product_name": "Malloc disk", 00:15:22.467 "block_size": 512, 00:15:22.467 "num_blocks": 65536, 00:15:22.467 "uuid": "06c01417-853c-4b96-9881-053e148d15ad", 00:15:22.467 "assigned_rate_limits": { 00:15:22.467 "rw_ios_per_sec": 0, 00:15:22.467 "rw_mbytes_per_sec": 0, 00:15:22.467 "r_mbytes_per_sec": 0, 00:15:22.467 "w_mbytes_per_sec": 0 00:15:22.467 }, 00:15:22.467 "claimed": true, 00:15:22.467 "claim_type": "exclusive_write", 00:15:22.467 "zoned": false, 00:15:22.467 "supported_io_types": { 00:15:22.467 "read": true, 00:15:22.467 "write": true, 00:15:22.467 "unmap": true, 00:15:22.467 "flush": true, 00:15:22.467 "reset": true, 00:15:22.467 "nvme_admin": false, 00:15:22.467 "nvme_io": false, 00:15:22.467 "nvme_io_md": false, 00:15:22.467 "write_zeroes": true, 00:15:22.467 "zcopy": true, 00:15:22.467 "get_zone_info": false, 00:15:22.467 "zone_management": false, 00:15:22.467 "zone_append": false, 00:15:22.467 "compare": false, 00:15:22.467 "compare_and_write": false, 00:15:22.467 "abort": true, 00:15:22.467 "seek_hole": false, 00:15:22.467 "seek_data": false, 00:15:22.467 "copy": true, 00:15:22.467 "nvme_iov_md": false 00:15:22.467 }, 00:15:22.467 "memory_domains": [ 00:15:22.467 { 00:15:22.467 "dma_device_id": "system", 00:15:22.467 "dma_device_type": 1 00:15:22.467 }, 00:15:22.467 { 00:15:22.467 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:22.467 "dma_device_type": 2 00:15:22.467 } 
00:15:22.467 ], 00:15:22.467 "driver_specific": {} 00:15:22.467 } 00:15:22.467 ] 00:15:22.467 16:11:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.467 16:11:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:22.467 16:11:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:22.467 16:11:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:22.467 16:11:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:22.467 16:11:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:22.467 16:11:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:22.467 16:11:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:22.467 16:11:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:22.467 16:11:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:22.467 16:11:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:22.467 16:11:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:22.467 16:11:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:22.467 16:11:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:22.467 16:11:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.467 16:11:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:15:22.467 16:11:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.467 16:11:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.467 16:11:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.467 16:11:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:22.467 "name": "Existed_Raid", 00:15:22.467 "uuid": "49fce4e8-f8b7-4ca0-9ef4-54baefde3452", 00:15:22.467 "strip_size_kb": 64, 00:15:22.467 "state": "configuring", 00:15:22.467 "raid_level": "raid5f", 00:15:22.467 "superblock": true, 00:15:22.467 "num_base_bdevs": 3, 00:15:22.467 "num_base_bdevs_discovered": 2, 00:15:22.467 "num_base_bdevs_operational": 3, 00:15:22.467 "base_bdevs_list": [ 00:15:22.467 { 00:15:22.467 "name": "BaseBdev1", 00:15:22.467 "uuid": "7b0c122a-31a4-4a5f-9467-dcbde0b8805c", 00:15:22.467 "is_configured": true, 00:15:22.467 "data_offset": 2048, 00:15:22.467 "data_size": 63488 00:15:22.467 }, 00:15:22.467 { 00:15:22.467 "name": "BaseBdev2", 00:15:22.467 "uuid": "06c01417-853c-4b96-9881-053e148d15ad", 00:15:22.467 "is_configured": true, 00:15:22.467 "data_offset": 2048, 00:15:22.467 "data_size": 63488 00:15:22.467 }, 00:15:22.467 { 00:15:22.467 "name": "BaseBdev3", 00:15:22.467 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.467 "is_configured": false, 00:15:22.467 "data_offset": 0, 00:15:22.467 "data_size": 0 00:15:22.467 } 00:15:22.467 ] 00:15:22.467 }' 00:15:22.467 16:11:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:22.467 16:11:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.774 16:11:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:22.774 16:11:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:15:22.774 16:11:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.048 [2024-12-12 16:11:49.152373] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:23.048 [2024-12-12 16:11:49.152910] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:23.048 [2024-12-12 16:11:49.152985] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:23.048 [2024-12-12 16:11:49.153347] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:23.048 BaseBdev3 00:15:23.048 16:11:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.048 16:11:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:23.048 16:11:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:23.048 16:11:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:23.048 16:11:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:23.048 16:11:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:23.048 16:11:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:23.048 16:11:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:23.048 16:11:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.048 16:11:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.048 [2024-12-12 16:11:49.159642] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:23.048 [2024-12-12 16:11:49.159722] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:23.048 [2024-12-12 16:11:49.159980] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:23.048 16:11:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.048 16:11:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:23.048 16:11:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.048 16:11:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.048 [ 00:15:23.048 { 00:15:23.048 "name": "BaseBdev3", 00:15:23.048 "aliases": [ 00:15:23.048 "c683cc38-62ae-454f-a060-779ea51e0b21" 00:15:23.048 ], 00:15:23.048 "product_name": "Malloc disk", 00:15:23.048 "block_size": 512, 00:15:23.048 "num_blocks": 65536, 00:15:23.048 "uuid": "c683cc38-62ae-454f-a060-779ea51e0b21", 00:15:23.048 "assigned_rate_limits": { 00:15:23.048 "rw_ios_per_sec": 0, 00:15:23.048 "rw_mbytes_per_sec": 0, 00:15:23.048 "r_mbytes_per_sec": 0, 00:15:23.048 "w_mbytes_per_sec": 0 00:15:23.048 }, 00:15:23.048 "claimed": true, 00:15:23.048 "claim_type": "exclusive_write", 00:15:23.048 "zoned": false, 00:15:23.048 "supported_io_types": { 00:15:23.048 "read": true, 00:15:23.048 "write": true, 00:15:23.048 "unmap": true, 00:15:23.048 "flush": true, 00:15:23.048 "reset": true, 00:15:23.048 "nvme_admin": false, 00:15:23.048 "nvme_io": false, 00:15:23.048 "nvme_io_md": false, 00:15:23.048 "write_zeroes": true, 00:15:23.048 "zcopy": true, 00:15:23.048 "get_zone_info": false, 00:15:23.048 "zone_management": false, 00:15:23.048 "zone_append": false, 00:15:23.048 "compare": false, 00:15:23.048 "compare_and_write": false, 00:15:23.048 "abort": true, 00:15:23.048 "seek_hole": false, 00:15:23.048 "seek_data": false, 00:15:23.048 "copy": true, 00:15:23.048 
"nvme_iov_md": false 00:15:23.048 }, 00:15:23.048 "memory_domains": [ 00:15:23.048 { 00:15:23.048 "dma_device_id": "system", 00:15:23.048 "dma_device_type": 1 00:15:23.048 }, 00:15:23.048 { 00:15:23.048 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:23.048 "dma_device_type": 2 00:15:23.048 } 00:15:23.048 ], 00:15:23.048 "driver_specific": {} 00:15:23.048 } 00:15:23.048 ] 00:15:23.048 16:11:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.048 16:11:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:23.048 16:11:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:23.048 16:11:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:23.048 16:11:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:23.048 16:11:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:23.048 16:11:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:23.048 16:11:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:23.048 16:11:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:23.048 16:11:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:23.048 16:11:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:23.049 16:11:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:23.049 16:11:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:23.049 16:11:49 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:15:23.049 16:11:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.049 16:11:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:23.049 16:11:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.049 16:11:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.049 16:11:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.049 16:11:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:23.049 "name": "Existed_Raid", 00:15:23.049 "uuid": "49fce4e8-f8b7-4ca0-9ef4-54baefde3452", 00:15:23.049 "strip_size_kb": 64, 00:15:23.049 "state": "online", 00:15:23.049 "raid_level": "raid5f", 00:15:23.049 "superblock": true, 00:15:23.049 "num_base_bdevs": 3, 00:15:23.049 "num_base_bdevs_discovered": 3, 00:15:23.049 "num_base_bdevs_operational": 3, 00:15:23.049 "base_bdevs_list": [ 00:15:23.049 { 00:15:23.049 "name": "BaseBdev1", 00:15:23.049 "uuid": "7b0c122a-31a4-4a5f-9467-dcbde0b8805c", 00:15:23.049 "is_configured": true, 00:15:23.049 "data_offset": 2048, 00:15:23.049 "data_size": 63488 00:15:23.049 }, 00:15:23.049 { 00:15:23.049 "name": "BaseBdev2", 00:15:23.049 "uuid": "06c01417-853c-4b96-9881-053e148d15ad", 00:15:23.049 "is_configured": true, 00:15:23.049 "data_offset": 2048, 00:15:23.049 "data_size": 63488 00:15:23.049 }, 00:15:23.049 { 00:15:23.049 "name": "BaseBdev3", 00:15:23.049 "uuid": "c683cc38-62ae-454f-a060-779ea51e0b21", 00:15:23.049 "is_configured": true, 00:15:23.049 "data_offset": 2048, 00:15:23.049 "data_size": 63488 00:15:23.049 } 00:15:23.049 ] 00:15:23.049 }' 00:15:23.049 16:11:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:23.049 16:11:49 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.618 16:11:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:23.618 16:11:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:23.618 16:11:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:23.618 16:11:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:23.618 16:11:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:23.618 16:11:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:23.618 16:11:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:23.618 16:11:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.618 16:11:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.618 16:11:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:23.618 [2024-12-12 16:11:49.683439] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:23.618 16:11:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.618 16:11:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:23.618 "name": "Existed_Raid", 00:15:23.618 "aliases": [ 00:15:23.618 "49fce4e8-f8b7-4ca0-9ef4-54baefde3452" 00:15:23.618 ], 00:15:23.618 "product_name": "Raid Volume", 00:15:23.618 "block_size": 512, 00:15:23.618 "num_blocks": 126976, 00:15:23.618 "uuid": "49fce4e8-f8b7-4ca0-9ef4-54baefde3452", 00:15:23.618 "assigned_rate_limits": { 00:15:23.618 "rw_ios_per_sec": 0, 00:15:23.618 
"rw_mbytes_per_sec": 0, 00:15:23.618 "r_mbytes_per_sec": 0, 00:15:23.618 "w_mbytes_per_sec": 0 00:15:23.618 }, 00:15:23.618 "claimed": false, 00:15:23.618 "zoned": false, 00:15:23.618 "supported_io_types": { 00:15:23.618 "read": true, 00:15:23.618 "write": true, 00:15:23.618 "unmap": false, 00:15:23.618 "flush": false, 00:15:23.618 "reset": true, 00:15:23.618 "nvme_admin": false, 00:15:23.618 "nvme_io": false, 00:15:23.618 "nvme_io_md": false, 00:15:23.618 "write_zeroes": true, 00:15:23.618 "zcopy": false, 00:15:23.618 "get_zone_info": false, 00:15:23.618 "zone_management": false, 00:15:23.618 "zone_append": false, 00:15:23.618 "compare": false, 00:15:23.618 "compare_and_write": false, 00:15:23.618 "abort": false, 00:15:23.618 "seek_hole": false, 00:15:23.618 "seek_data": false, 00:15:23.618 "copy": false, 00:15:23.618 "nvme_iov_md": false 00:15:23.618 }, 00:15:23.618 "driver_specific": { 00:15:23.618 "raid": { 00:15:23.618 "uuid": "49fce4e8-f8b7-4ca0-9ef4-54baefde3452", 00:15:23.618 "strip_size_kb": 64, 00:15:23.618 "state": "online", 00:15:23.618 "raid_level": "raid5f", 00:15:23.618 "superblock": true, 00:15:23.618 "num_base_bdevs": 3, 00:15:23.618 "num_base_bdevs_discovered": 3, 00:15:23.618 "num_base_bdevs_operational": 3, 00:15:23.618 "base_bdevs_list": [ 00:15:23.618 { 00:15:23.618 "name": "BaseBdev1", 00:15:23.618 "uuid": "7b0c122a-31a4-4a5f-9467-dcbde0b8805c", 00:15:23.618 "is_configured": true, 00:15:23.618 "data_offset": 2048, 00:15:23.618 "data_size": 63488 00:15:23.618 }, 00:15:23.618 { 00:15:23.618 "name": "BaseBdev2", 00:15:23.618 "uuid": "06c01417-853c-4b96-9881-053e148d15ad", 00:15:23.618 "is_configured": true, 00:15:23.618 "data_offset": 2048, 00:15:23.618 "data_size": 63488 00:15:23.618 }, 00:15:23.618 { 00:15:23.618 "name": "BaseBdev3", 00:15:23.618 "uuid": "c683cc38-62ae-454f-a060-779ea51e0b21", 00:15:23.618 "is_configured": true, 00:15:23.618 "data_offset": 2048, 00:15:23.618 "data_size": 63488 00:15:23.618 } 00:15:23.618 ] 00:15:23.618 } 
00:15:23.618 } 00:15:23.618 }' 00:15:23.618 16:11:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:23.618 16:11:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:23.618 BaseBdev2 00:15:23.618 BaseBdev3' 00:15:23.618 16:11:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:23.618 16:11:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:23.618 16:11:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:23.618 16:11:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:23.618 16:11:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:23.619 16:11:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.619 16:11:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.619 16:11:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.619 16:11:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:23.619 16:11:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:23.619 16:11:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:23.619 16:11:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:23.619 16:11:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:23.619 16:11:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.619 16:11:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.619 16:11:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.619 16:11:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:23.619 16:11:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:23.619 16:11:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:23.619 16:11:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:23.619 16:11:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:23.619 16:11:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.619 16:11:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.619 16:11:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.619 16:11:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:23.619 16:11:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:23.619 16:11:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:23.619 16:11:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.619 16:11:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.619 [2024-12-12 
16:11:49.967178] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:23.878 16:11:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.878 16:11:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:23.878 16:11:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:15:23.878 16:11:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:23.878 16:11:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:15:23.878 16:11:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:23.878 16:11:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:15:23.878 16:11:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:23.878 16:11:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:23.878 16:11:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:23.878 16:11:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:23.878 16:11:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:23.878 16:11:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:23.878 16:11:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:23.878 16:11:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:23.878 16:11:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:23.878 16:11:50 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.878 16:11:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.878 16:11:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:23.878 16:11:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.878 16:11:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.878 16:11:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:23.878 "name": "Existed_Raid", 00:15:23.879 "uuid": "49fce4e8-f8b7-4ca0-9ef4-54baefde3452", 00:15:23.879 "strip_size_kb": 64, 00:15:23.879 "state": "online", 00:15:23.879 "raid_level": "raid5f", 00:15:23.879 "superblock": true, 00:15:23.879 "num_base_bdevs": 3, 00:15:23.879 "num_base_bdevs_discovered": 2, 00:15:23.879 "num_base_bdevs_operational": 2, 00:15:23.879 "base_bdevs_list": [ 00:15:23.879 { 00:15:23.879 "name": null, 00:15:23.879 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.879 "is_configured": false, 00:15:23.879 "data_offset": 0, 00:15:23.879 "data_size": 63488 00:15:23.879 }, 00:15:23.879 { 00:15:23.879 "name": "BaseBdev2", 00:15:23.879 "uuid": "06c01417-853c-4b96-9881-053e148d15ad", 00:15:23.879 "is_configured": true, 00:15:23.879 "data_offset": 2048, 00:15:23.879 "data_size": 63488 00:15:23.879 }, 00:15:23.879 { 00:15:23.879 "name": "BaseBdev3", 00:15:23.879 "uuid": "c683cc38-62ae-454f-a060-779ea51e0b21", 00:15:23.879 "is_configured": true, 00:15:23.879 "data_offset": 2048, 00:15:23.879 "data_size": 63488 00:15:23.879 } 00:15:23.879 ] 00:15:23.879 }' 00:15:23.879 16:11:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:23.879 16:11:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:15:24.447 16:11:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:24.447 16:11:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:24.447 16:11:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.447 16:11:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:24.447 16:11:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.447 16:11:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.447 16:11:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.447 16:11:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:24.447 16:11:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:24.447 16:11:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:24.447 16:11:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.447 16:11:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.447 [2024-12-12 16:11:50.591888] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:24.447 [2024-12-12 16:11:50.592138] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:24.447 [2024-12-12 16:11:50.707392] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:24.447 16:11:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.447 16:11:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:24.447 16:11:50 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:24.447 16:11:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:24.447 16:11:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.447 16:11:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.447 16:11:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.447 16:11:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.447 16:11:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:24.447 16:11:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:24.447 16:11:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:24.447 16:11:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.447 16:11:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.447 [2024-12-12 16:11:50.767352] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:24.447 [2024-12-12 16:11:50.767543] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:24.705 16:11:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.705 16:11:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:24.705 16:11:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:24.705 16:11:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.705 
16:11:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:24.705 16:11:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.705 16:11:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.705 16:11:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.706 16:11:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:24.706 16:11:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:24.706 16:11:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:15:24.706 16:11:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:24.706 16:11:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:24.706 16:11:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:24.706 16:11:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.706 16:11:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.706 BaseBdev2 00:15:24.706 16:11:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.706 16:11:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:24.706 16:11:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:24.706 16:11:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:24.706 16:11:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:24.706 16:11:50 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:24.706 16:11:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:24.706 16:11:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:24.706 16:11:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.706 16:11:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.706 16:11:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.706 16:11:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:24.706 16:11:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.706 16:11:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.706 [ 00:15:24.706 { 00:15:24.706 "name": "BaseBdev2", 00:15:24.706 "aliases": [ 00:15:24.706 "3738b545-26ef-4b06-a31d-a1467bc586cb" 00:15:24.706 ], 00:15:24.706 "product_name": "Malloc disk", 00:15:24.706 "block_size": 512, 00:15:24.706 "num_blocks": 65536, 00:15:24.706 "uuid": "3738b545-26ef-4b06-a31d-a1467bc586cb", 00:15:24.706 "assigned_rate_limits": { 00:15:24.706 "rw_ios_per_sec": 0, 00:15:24.706 "rw_mbytes_per_sec": 0, 00:15:24.706 "r_mbytes_per_sec": 0, 00:15:24.706 "w_mbytes_per_sec": 0 00:15:24.706 }, 00:15:24.706 "claimed": false, 00:15:24.706 "zoned": false, 00:15:24.706 "supported_io_types": { 00:15:24.706 "read": true, 00:15:24.706 "write": true, 00:15:24.706 "unmap": true, 00:15:24.706 "flush": true, 00:15:24.706 "reset": true, 00:15:24.706 "nvme_admin": false, 00:15:24.706 "nvme_io": false, 00:15:24.706 "nvme_io_md": false, 00:15:24.706 "write_zeroes": true, 00:15:24.706 "zcopy": true, 00:15:24.706 "get_zone_info": false, 
00:15:24.706 "zone_management": false, 00:15:24.706 "zone_append": false, 00:15:24.706 "compare": false, 00:15:24.706 "compare_and_write": false, 00:15:24.706 "abort": true, 00:15:24.706 "seek_hole": false, 00:15:24.706 "seek_data": false, 00:15:24.706 "copy": true, 00:15:24.706 "nvme_iov_md": false 00:15:24.706 }, 00:15:24.706 "memory_domains": [ 00:15:24.706 { 00:15:24.706 "dma_device_id": "system", 00:15:24.706 "dma_device_type": 1 00:15:24.706 }, 00:15:24.706 { 00:15:24.706 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:24.706 "dma_device_type": 2 00:15:24.706 } 00:15:24.706 ], 00:15:24.706 "driver_specific": {} 00:15:24.706 } 00:15:24.706 ] 00:15:24.706 16:11:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.706 16:11:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:24.706 16:11:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:24.706 16:11:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:24.706 16:11:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:24.706 16:11:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.706 16:11:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.966 BaseBdev3 00:15:24.966 16:11:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.966 16:11:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:24.966 16:11:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:24.966 16:11:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:24.966 16:11:51 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:24.966 16:11:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:24.966 16:11:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:24.966 16:11:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:24.966 16:11:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.966 16:11:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.966 16:11:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.966 16:11:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:24.966 16:11:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.966 16:11:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.966 [ 00:15:24.966 { 00:15:24.966 "name": "BaseBdev3", 00:15:24.966 "aliases": [ 00:15:24.966 "4618563b-b1fe-451a-885d-6916284a77b3" 00:15:24.966 ], 00:15:24.966 "product_name": "Malloc disk", 00:15:24.966 "block_size": 512, 00:15:24.966 "num_blocks": 65536, 00:15:24.966 "uuid": "4618563b-b1fe-451a-885d-6916284a77b3", 00:15:24.966 "assigned_rate_limits": { 00:15:24.966 "rw_ios_per_sec": 0, 00:15:24.966 "rw_mbytes_per_sec": 0, 00:15:24.966 "r_mbytes_per_sec": 0, 00:15:24.966 "w_mbytes_per_sec": 0 00:15:24.966 }, 00:15:24.966 "claimed": false, 00:15:24.966 "zoned": false, 00:15:24.966 "supported_io_types": { 00:15:24.966 "read": true, 00:15:24.966 "write": true, 00:15:24.966 "unmap": true, 00:15:24.966 "flush": true, 00:15:24.966 "reset": true, 00:15:24.966 "nvme_admin": false, 00:15:24.966 "nvme_io": false, 00:15:24.966 "nvme_io_md": 
false, 00:15:24.966 "write_zeroes": true, 00:15:24.966 "zcopy": true, 00:15:24.966 "get_zone_info": false, 00:15:24.966 "zone_management": false, 00:15:24.966 "zone_append": false, 00:15:24.966 "compare": false, 00:15:24.966 "compare_and_write": false, 00:15:24.966 "abort": true, 00:15:24.966 "seek_hole": false, 00:15:24.966 "seek_data": false, 00:15:24.966 "copy": true, 00:15:24.966 "nvme_iov_md": false 00:15:24.966 }, 00:15:24.966 "memory_domains": [ 00:15:24.966 { 00:15:24.966 "dma_device_id": "system", 00:15:24.966 "dma_device_type": 1 00:15:24.966 }, 00:15:24.966 { 00:15:24.966 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:24.966 "dma_device_type": 2 00:15:24.966 } 00:15:24.966 ], 00:15:24.966 "driver_specific": {} 00:15:24.966 } 00:15:24.966 ] 00:15:24.966 16:11:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.966 16:11:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:24.966 16:11:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:24.966 16:11:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:24.966 16:11:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:24.966 16:11:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.966 16:11:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.966 [2024-12-12 16:11:51.130038] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:24.966 [2024-12-12 16:11:51.130226] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:24.966 [2024-12-12 16:11:51.130291] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is 
claimed 00:15:24.966 [2024-12-12 16:11:51.132835] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:24.966 16:11:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.966 16:11:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:24.966 16:11:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:24.966 16:11:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:24.966 16:11:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:24.966 16:11:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:24.966 16:11:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:24.966 16:11:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:24.966 16:11:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:24.966 16:11:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:24.966 16:11:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:24.966 16:11:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.966 16:11:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.966 16:11:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:24.966 16:11:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.966 16:11:51 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.966 16:11:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:24.966 "name": "Existed_Raid", 00:15:24.966 "uuid": "a52c56c4-c665-4e2e-b848-3b0d0befbec1", 00:15:24.966 "strip_size_kb": 64, 00:15:24.966 "state": "configuring", 00:15:24.966 "raid_level": "raid5f", 00:15:24.966 "superblock": true, 00:15:24.966 "num_base_bdevs": 3, 00:15:24.966 "num_base_bdevs_discovered": 2, 00:15:24.966 "num_base_bdevs_operational": 3, 00:15:24.966 "base_bdevs_list": [ 00:15:24.966 { 00:15:24.966 "name": "BaseBdev1", 00:15:24.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.966 "is_configured": false, 00:15:24.966 "data_offset": 0, 00:15:24.966 "data_size": 0 00:15:24.966 }, 00:15:24.966 { 00:15:24.966 "name": "BaseBdev2", 00:15:24.966 "uuid": "3738b545-26ef-4b06-a31d-a1467bc586cb", 00:15:24.966 "is_configured": true, 00:15:24.966 "data_offset": 2048, 00:15:24.966 "data_size": 63488 00:15:24.966 }, 00:15:24.967 { 00:15:24.967 "name": "BaseBdev3", 00:15:24.967 "uuid": "4618563b-b1fe-451a-885d-6916284a77b3", 00:15:24.967 "is_configured": true, 00:15:24.967 "data_offset": 2048, 00:15:24.967 "data_size": 63488 00:15:24.967 } 00:15:24.967 ] 00:15:24.967 }' 00:15:24.967 16:11:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:24.967 16:11:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.225 16:11:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:25.225 16:11:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.225 16:11:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.225 [2024-12-12 16:11:51.569330] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:25.485 
16:11:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.485 16:11:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:25.485 16:11:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:25.485 16:11:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:25.485 16:11:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:25.485 16:11:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:25.485 16:11:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:25.485 16:11:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:25.485 16:11:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:25.485 16:11:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:25.485 16:11:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:25.485 16:11:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.485 16:11:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:25.485 16:11:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.485 16:11:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.485 16:11:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.485 16:11:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:15:25.485 "name": "Existed_Raid", 00:15:25.485 "uuid": "a52c56c4-c665-4e2e-b848-3b0d0befbec1", 00:15:25.485 "strip_size_kb": 64, 00:15:25.485 "state": "configuring", 00:15:25.485 "raid_level": "raid5f", 00:15:25.485 "superblock": true, 00:15:25.485 "num_base_bdevs": 3, 00:15:25.485 "num_base_bdevs_discovered": 1, 00:15:25.485 "num_base_bdevs_operational": 3, 00:15:25.485 "base_bdevs_list": [ 00:15:25.485 { 00:15:25.485 "name": "BaseBdev1", 00:15:25.485 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.485 "is_configured": false, 00:15:25.485 "data_offset": 0, 00:15:25.485 "data_size": 0 00:15:25.485 }, 00:15:25.485 { 00:15:25.485 "name": null, 00:15:25.485 "uuid": "3738b545-26ef-4b06-a31d-a1467bc586cb", 00:15:25.485 "is_configured": false, 00:15:25.485 "data_offset": 0, 00:15:25.485 "data_size": 63488 00:15:25.485 }, 00:15:25.485 { 00:15:25.485 "name": "BaseBdev3", 00:15:25.485 "uuid": "4618563b-b1fe-451a-885d-6916284a77b3", 00:15:25.485 "is_configured": true, 00:15:25.485 "data_offset": 2048, 00:15:25.485 "data_size": 63488 00:15:25.485 } 00:15:25.485 ] 00:15:25.485 }' 00:15:25.485 16:11:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:25.485 16:11:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.745 16:11:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.745 16:11:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.745 16:11:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.745 16:11:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:25.745 16:11:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.745 16:11:52 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:25.745 16:11:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:25.745 16:11:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.745 16:11:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.004 BaseBdev1 00:15:26.004 [2024-12-12 16:11:52.115368] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:26.004 16:11:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.004 16:11:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:26.004 16:11:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:26.004 16:11:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:26.004 16:11:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:26.004 16:11:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:26.004 16:11:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:26.004 16:11:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:26.004 16:11:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.004 16:11:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.004 16:11:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.004 16:11:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:26.004 
16:11:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.004 16:11:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.004 [ 00:15:26.004 { 00:15:26.004 "name": "BaseBdev1", 00:15:26.004 "aliases": [ 00:15:26.004 "1e62f818-fd3d-472c-a49a-689099f95885" 00:15:26.004 ], 00:15:26.004 "product_name": "Malloc disk", 00:15:26.004 "block_size": 512, 00:15:26.004 "num_blocks": 65536, 00:15:26.004 "uuid": "1e62f818-fd3d-472c-a49a-689099f95885", 00:15:26.004 "assigned_rate_limits": { 00:15:26.004 "rw_ios_per_sec": 0, 00:15:26.004 "rw_mbytes_per_sec": 0, 00:15:26.004 "r_mbytes_per_sec": 0, 00:15:26.004 "w_mbytes_per_sec": 0 00:15:26.004 }, 00:15:26.004 "claimed": true, 00:15:26.004 "claim_type": "exclusive_write", 00:15:26.004 "zoned": false, 00:15:26.004 "supported_io_types": { 00:15:26.004 "read": true, 00:15:26.004 "write": true, 00:15:26.004 "unmap": true, 00:15:26.004 "flush": true, 00:15:26.004 "reset": true, 00:15:26.004 "nvme_admin": false, 00:15:26.004 "nvme_io": false, 00:15:26.004 "nvme_io_md": false, 00:15:26.004 "write_zeroes": true, 00:15:26.004 "zcopy": true, 00:15:26.004 "get_zone_info": false, 00:15:26.004 "zone_management": false, 00:15:26.004 "zone_append": false, 00:15:26.004 "compare": false, 00:15:26.004 "compare_and_write": false, 00:15:26.004 "abort": true, 00:15:26.004 "seek_hole": false, 00:15:26.004 "seek_data": false, 00:15:26.004 "copy": true, 00:15:26.004 "nvme_iov_md": false 00:15:26.004 }, 00:15:26.004 "memory_domains": [ 00:15:26.004 { 00:15:26.004 "dma_device_id": "system", 00:15:26.004 "dma_device_type": 1 00:15:26.004 }, 00:15:26.004 { 00:15:26.004 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:26.004 "dma_device_type": 2 00:15:26.004 } 00:15:26.004 ], 00:15:26.004 "driver_specific": {} 00:15:26.004 } 00:15:26.004 ] 00:15:26.005 16:11:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.005 
16:11:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:26.005 16:11:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:26.005 16:11:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:26.005 16:11:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:26.005 16:11:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:26.005 16:11:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:26.005 16:11:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:26.005 16:11:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:26.005 16:11:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:26.005 16:11:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:26.005 16:11:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:26.005 16:11:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.005 16:11:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:26.005 16:11:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.005 16:11:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.005 16:11:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.005 16:11:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:15:26.005 "name": "Existed_Raid", 00:15:26.005 "uuid": "a52c56c4-c665-4e2e-b848-3b0d0befbec1", 00:15:26.005 "strip_size_kb": 64, 00:15:26.005 "state": "configuring", 00:15:26.005 "raid_level": "raid5f", 00:15:26.005 "superblock": true, 00:15:26.005 "num_base_bdevs": 3, 00:15:26.005 "num_base_bdevs_discovered": 2, 00:15:26.005 "num_base_bdevs_operational": 3, 00:15:26.005 "base_bdevs_list": [ 00:15:26.005 { 00:15:26.005 "name": "BaseBdev1", 00:15:26.005 "uuid": "1e62f818-fd3d-472c-a49a-689099f95885", 00:15:26.005 "is_configured": true, 00:15:26.005 "data_offset": 2048, 00:15:26.005 "data_size": 63488 00:15:26.005 }, 00:15:26.005 { 00:15:26.005 "name": null, 00:15:26.005 "uuid": "3738b545-26ef-4b06-a31d-a1467bc586cb", 00:15:26.005 "is_configured": false, 00:15:26.005 "data_offset": 0, 00:15:26.005 "data_size": 63488 00:15:26.005 }, 00:15:26.005 { 00:15:26.005 "name": "BaseBdev3", 00:15:26.005 "uuid": "4618563b-b1fe-451a-885d-6916284a77b3", 00:15:26.005 "is_configured": true, 00:15:26.005 "data_offset": 2048, 00:15:26.005 "data_size": 63488 00:15:26.005 } 00:15:26.005 ] 00:15:26.005 }' 00:15:26.005 16:11:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:26.005 16:11:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.264 16:11:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.264 16:11:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.264 16:11:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.264 16:11:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:26.523 16:11:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.523 16:11:52 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:26.523 16:11:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:26.523 16:11:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.523 16:11:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.523 [2024-12-12 16:11:52.646625] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:26.523 16:11:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.523 16:11:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:26.523 16:11:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:26.523 16:11:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:26.523 16:11:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:26.523 16:11:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:26.523 16:11:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:26.523 16:11:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:26.523 16:11:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:26.523 16:11:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:26.523 16:11:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:26.523 16:11:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.523 16:11:52 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:26.523 16:11:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.523 16:11:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.523 16:11:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.523 16:11:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:26.523 "name": "Existed_Raid", 00:15:26.523 "uuid": "a52c56c4-c665-4e2e-b848-3b0d0befbec1", 00:15:26.523 "strip_size_kb": 64, 00:15:26.523 "state": "configuring", 00:15:26.523 "raid_level": "raid5f", 00:15:26.523 "superblock": true, 00:15:26.523 "num_base_bdevs": 3, 00:15:26.523 "num_base_bdevs_discovered": 1, 00:15:26.523 "num_base_bdevs_operational": 3, 00:15:26.523 "base_bdevs_list": [ 00:15:26.523 { 00:15:26.523 "name": "BaseBdev1", 00:15:26.523 "uuid": "1e62f818-fd3d-472c-a49a-689099f95885", 00:15:26.523 "is_configured": true, 00:15:26.523 "data_offset": 2048, 00:15:26.523 "data_size": 63488 00:15:26.523 }, 00:15:26.523 { 00:15:26.523 "name": null, 00:15:26.523 "uuid": "3738b545-26ef-4b06-a31d-a1467bc586cb", 00:15:26.523 "is_configured": false, 00:15:26.523 "data_offset": 0, 00:15:26.523 "data_size": 63488 00:15:26.523 }, 00:15:26.523 { 00:15:26.523 "name": null, 00:15:26.523 "uuid": "4618563b-b1fe-451a-885d-6916284a77b3", 00:15:26.523 "is_configured": false, 00:15:26.523 "data_offset": 0, 00:15:26.523 "data_size": 63488 00:15:26.523 } 00:15:26.523 ] 00:15:26.523 }' 00:15:26.523 16:11:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:26.523 16:11:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.782 16:11:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 
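For readers following this transcript: each `verify_raid_bdev_state Existed_Raid configuring raid5f 64 3` cycle above amounts to fetching the `bdev_raid_get_bdevs all` JSON and comparing a handful of fields with jq. A minimal Python sketch of that check (the sample data is abridged from the `raid_bdev_info` dump logged just above; `verify_raid_bdev_state` here is a hypothetical stand-in for the shell helper, not SPDK code):

```python
import json

# Abridged sample of `rpc.py bdev_raid_get_bdevs all` output, copied from the
# Existed_Raid state dumped in the log above (one base bdev configured).
sample = json.loads("""
[{"name": "Existed_Raid",
  "strip_size_kb": 64,
  "state": "configuring",
  "raid_level": "raid5f",
  "superblock": true,
  "num_base_bdevs": 3,
  "num_base_bdevs_discovered": 1,
  "num_base_bdevs_operational": 3,
  "base_bdevs_list": [
    {"name": "BaseBdev1", "is_configured": true},
    {"name": null,        "is_configured": false},
    {"name": null,        "is_configured": false}]}]
""")

def verify_raid_bdev_state(bdevs, name, state, level, strip_kb, operational):
    """Mirror the shell helper: select the named raid bdev (the jq
    'select(.name == ...)' step) and compare the expected fields."""
    info = next(b for b in bdevs if b["name"] == name)
    # Recount discovered members from base_bdevs_list, as a cross-check
    # against the num_base_bdevs_discovered field reported by the RPC.
    discovered = sum(1 for b in info["base_bdevs_list"] if b["is_configured"])
    return (info["state"] == state
            and info["raid_level"] == level
            and info["strip_size_kb"] == strip_kb
            and info["num_base_bdevs_operational"] == operational
            and info["num_base_bdevs_discovered"] == discovered)

print(verify_raid_bdev_state(sample, "Existed_Raid",
                             "configuring", "raid5f", 64, 3))  # → True
```

The test keeps the expected state at `configuring` throughout the add/remove churn because the array stays degraded (discovered < operational) until the final `NewBaseBdev` claim, after which the log checks for `online`.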
00:15:26.782 16:11:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:26.782 16:11:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.782 16:11:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.782 16:11:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.782 16:11:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:26.782 16:11:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:26.782 16:11:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.782 16:11:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.782 [2024-12-12 16:11:53.085990] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:26.782 16:11:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.782 16:11:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:26.782 16:11:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:26.782 16:11:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:26.782 16:11:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:26.782 16:11:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:26.782 16:11:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:26.782 16:11:53 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:26.782 16:11:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:26.782 16:11:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:26.782 16:11:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:26.782 16:11:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.782 16:11:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.782 16:11:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.782 16:11:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:26.782 16:11:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.041 16:11:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:27.041 "name": "Existed_Raid", 00:15:27.041 "uuid": "a52c56c4-c665-4e2e-b848-3b0d0befbec1", 00:15:27.041 "strip_size_kb": 64, 00:15:27.041 "state": "configuring", 00:15:27.041 "raid_level": "raid5f", 00:15:27.041 "superblock": true, 00:15:27.041 "num_base_bdevs": 3, 00:15:27.041 "num_base_bdevs_discovered": 2, 00:15:27.041 "num_base_bdevs_operational": 3, 00:15:27.041 "base_bdevs_list": [ 00:15:27.041 { 00:15:27.041 "name": "BaseBdev1", 00:15:27.041 "uuid": "1e62f818-fd3d-472c-a49a-689099f95885", 00:15:27.041 "is_configured": true, 00:15:27.041 "data_offset": 2048, 00:15:27.041 "data_size": 63488 00:15:27.041 }, 00:15:27.041 { 00:15:27.041 "name": null, 00:15:27.041 "uuid": "3738b545-26ef-4b06-a31d-a1467bc586cb", 00:15:27.041 "is_configured": false, 00:15:27.041 "data_offset": 0, 00:15:27.041 "data_size": 63488 00:15:27.041 }, 00:15:27.041 { 
00:15:27.041 "name": "BaseBdev3", 00:15:27.041 "uuid": "4618563b-b1fe-451a-885d-6916284a77b3", 00:15:27.041 "is_configured": true, 00:15:27.041 "data_offset": 2048, 00:15:27.041 "data_size": 63488 00:15:27.041 } 00:15:27.041 ] 00:15:27.041 }' 00:15:27.041 16:11:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:27.041 16:11:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.300 16:11:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:27.300 16:11:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.300 16:11:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.300 16:11:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.300 16:11:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.300 16:11:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:27.300 16:11:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:27.300 16:11:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.300 16:11:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.300 [2024-12-12 16:11:53.577208] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:27.559 16:11:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.560 16:11:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:27.560 16:11:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:15:27.560 16:11:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:27.560 16:11:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:27.560 16:11:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:27.560 16:11:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:27.560 16:11:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:27.560 16:11:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:27.560 16:11:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:27.560 16:11:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:27.560 16:11:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.560 16:11:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:27.560 16:11:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.560 16:11:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.560 16:11:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.560 16:11:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:27.560 "name": "Existed_Raid", 00:15:27.560 "uuid": "a52c56c4-c665-4e2e-b848-3b0d0befbec1", 00:15:27.560 "strip_size_kb": 64, 00:15:27.560 "state": "configuring", 00:15:27.560 "raid_level": "raid5f", 00:15:27.560 "superblock": true, 00:15:27.560 "num_base_bdevs": 3, 00:15:27.560 "num_base_bdevs_discovered": 1, 00:15:27.560 
"num_base_bdevs_operational": 3, 00:15:27.560 "base_bdevs_list": [ 00:15:27.560 { 00:15:27.560 "name": null, 00:15:27.560 "uuid": "1e62f818-fd3d-472c-a49a-689099f95885", 00:15:27.560 "is_configured": false, 00:15:27.560 "data_offset": 0, 00:15:27.560 "data_size": 63488 00:15:27.560 }, 00:15:27.560 { 00:15:27.560 "name": null, 00:15:27.560 "uuid": "3738b545-26ef-4b06-a31d-a1467bc586cb", 00:15:27.560 "is_configured": false, 00:15:27.560 "data_offset": 0, 00:15:27.560 "data_size": 63488 00:15:27.560 }, 00:15:27.560 { 00:15:27.560 "name": "BaseBdev3", 00:15:27.560 "uuid": "4618563b-b1fe-451a-885d-6916284a77b3", 00:15:27.560 "is_configured": true, 00:15:27.560 "data_offset": 2048, 00:15:27.560 "data_size": 63488 00:15:27.560 } 00:15:27.560 ] 00:15:27.560 }' 00:15:27.560 16:11:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:27.560 16:11:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.819 16:11:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.819 16:11:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:27.819 16:11:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.819 16:11:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.819 16:11:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.819 16:11:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:27.819 16:11:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:27.819 16:11:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.819 16:11:54 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.819 [2024-12-12 16:11:54.168999] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:28.078 16:11:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.078 16:11:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:28.078 16:11:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:28.078 16:11:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:28.078 16:11:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:28.078 16:11:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:28.078 16:11:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:28.078 16:11:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:28.078 16:11:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:28.078 16:11:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:28.078 16:11:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:28.078 16:11:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.078 16:11:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.078 16:11:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.078 16:11:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:15:28.078 16:11:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.078 16:11:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:28.078 "name": "Existed_Raid", 00:15:28.078 "uuid": "a52c56c4-c665-4e2e-b848-3b0d0befbec1", 00:15:28.078 "strip_size_kb": 64, 00:15:28.078 "state": "configuring", 00:15:28.078 "raid_level": "raid5f", 00:15:28.078 "superblock": true, 00:15:28.078 "num_base_bdevs": 3, 00:15:28.078 "num_base_bdevs_discovered": 2, 00:15:28.078 "num_base_bdevs_operational": 3, 00:15:28.078 "base_bdevs_list": [ 00:15:28.078 { 00:15:28.078 "name": null, 00:15:28.078 "uuid": "1e62f818-fd3d-472c-a49a-689099f95885", 00:15:28.078 "is_configured": false, 00:15:28.078 "data_offset": 0, 00:15:28.078 "data_size": 63488 00:15:28.078 }, 00:15:28.078 { 00:15:28.078 "name": "BaseBdev2", 00:15:28.078 "uuid": "3738b545-26ef-4b06-a31d-a1467bc586cb", 00:15:28.078 "is_configured": true, 00:15:28.078 "data_offset": 2048, 00:15:28.078 "data_size": 63488 00:15:28.078 }, 00:15:28.078 { 00:15:28.078 "name": "BaseBdev3", 00:15:28.078 "uuid": "4618563b-b1fe-451a-885d-6916284a77b3", 00:15:28.078 "is_configured": true, 00:15:28.078 "data_offset": 2048, 00:15:28.078 "data_size": 63488 00:15:28.078 } 00:15:28.078 ] 00:15:28.078 }' 00:15:28.078 16:11:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:28.078 16:11:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.338 16:11:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.338 16:11:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:28.338 16:11:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.338 16:11:54 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:28.338 16:11:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.597 16:11:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:28.597 16:11:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:28.597 16:11:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.597 16:11:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.597 16:11:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.597 16:11:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.597 16:11:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 1e62f818-fd3d-472c-a49a-689099f95885 00:15:28.597 16:11:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.597 16:11:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.597 [2024-12-12 16:11:54.774724] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:28.597 [2024-12-12 16:11:54.775062] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:28.597 [2024-12-12 16:11:54.775086] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:28.597 NewBaseBdev 00:15:28.597 [2024-12-12 16:11:54.775395] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:15:28.597 16:11:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.597 16:11:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # 
waitforbdev NewBaseBdev 00:15:28.597 16:11:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:15:28.597 16:11:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:28.597 16:11:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:28.597 16:11:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:28.597 16:11:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:28.597 16:11:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:28.597 16:11:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.597 16:11:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.597 [2024-12-12 16:11:54.781274] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:28.597 [2024-12-12 16:11:54.781392] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:28.597 [2024-12-12 16:11:54.781678] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:28.597 16:11:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.597 16:11:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:28.597 16:11:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.597 16:11:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.597 [ 00:15:28.597 { 00:15:28.597 "name": "NewBaseBdev", 00:15:28.597 "aliases": [ 00:15:28.597 "1e62f818-fd3d-472c-a49a-689099f95885" 00:15:28.597 
], 00:15:28.597 "product_name": "Malloc disk", 00:15:28.597 "block_size": 512, 00:15:28.597 "num_blocks": 65536, 00:15:28.597 "uuid": "1e62f818-fd3d-472c-a49a-689099f95885", 00:15:28.597 "assigned_rate_limits": { 00:15:28.597 "rw_ios_per_sec": 0, 00:15:28.597 "rw_mbytes_per_sec": 0, 00:15:28.597 "r_mbytes_per_sec": 0, 00:15:28.597 "w_mbytes_per_sec": 0 00:15:28.597 }, 00:15:28.597 "claimed": true, 00:15:28.597 "claim_type": "exclusive_write", 00:15:28.597 "zoned": false, 00:15:28.597 "supported_io_types": { 00:15:28.597 "read": true, 00:15:28.597 "write": true, 00:15:28.597 "unmap": true, 00:15:28.597 "flush": true, 00:15:28.597 "reset": true, 00:15:28.597 "nvme_admin": false, 00:15:28.597 "nvme_io": false, 00:15:28.597 "nvme_io_md": false, 00:15:28.597 "write_zeroes": true, 00:15:28.597 "zcopy": true, 00:15:28.597 "get_zone_info": false, 00:15:28.597 "zone_management": false, 00:15:28.597 "zone_append": false, 00:15:28.597 "compare": false, 00:15:28.597 "compare_and_write": false, 00:15:28.597 "abort": true, 00:15:28.597 "seek_hole": false, 00:15:28.597 "seek_data": false, 00:15:28.597 "copy": true, 00:15:28.597 "nvme_iov_md": false 00:15:28.597 }, 00:15:28.597 "memory_domains": [ 00:15:28.597 { 00:15:28.597 "dma_device_id": "system", 00:15:28.597 "dma_device_type": 1 00:15:28.597 }, 00:15:28.597 { 00:15:28.597 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:28.597 "dma_device_type": 2 00:15:28.597 } 00:15:28.597 ], 00:15:28.597 "driver_specific": {} 00:15:28.597 } 00:15:28.597 ] 00:15:28.597 16:11:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.597 16:11:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:28.597 16:11:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:28.597 16:11:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:15:28.597 16:11:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:28.597 16:11:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:28.597 16:11:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:28.597 16:11:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:28.597 16:11:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:28.597 16:11:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:28.597 16:11:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:28.597 16:11:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:28.597 16:11:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.597 16:11:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:28.597 16:11:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.597 16:11:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.597 16:11:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.597 16:11:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:28.597 "name": "Existed_Raid", 00:15:28.597 "uuid": "a52c56c4-c665-4e2e-b848-3b0d0befbec1", 00:15:28.597 "strip_size_kb": 64, 00:15:28.597 "state": "online", 00:15:28.597 "raid_level": "raid5f", 00:15:28.597 "superblock": true, 00:15:28.597 "num_base_bdevs": 3, 00:15:28.597 "num_base_bdevs_discovered": 3, 00:15:28.597 
"num_base_bdevs_operational": 3, 00:15:28.597 "base_bdevs_list": [ 00:15:28.597 { 00:15:28.597 "name": "NewBaseBdev", 00:15:28.597 "uuid": "1e62f818-fd3d-472c-a49a-689099f95885", 00:15:28.597 "is_configured": true, 00:15:28.597 "data_offset": 2048, 00:15:28.597 "data_size": 63488 00:15:28.597 }, 00:15:28.597 { 00:15:28.597 "name": "BaseBdev2", 00:15:28.597 "uuid": "3738b545-26ef-4b06-a31d-a1467bc586cb", 00:15:28.597 "is_configured": true, 00:15:28.597 "data_offset": 2048, 00:15:28.597 "data_size": 63488 00:15:28.597 }, 00:15:28.597 { 00:15:28.597 "name": "BaseBdev3", 00:15:28.597 "uuid": "4618563b-b1fe-451a-885d-6916284a77b3", 00:15:28.597 "is_configured": true, 00:15:28.597 "data_offset": 2048, 00:15:28.597 "data_size": 63488 00:15:28.597 } 00:15:28.597 ] 00:15:28.597 }' 00:15:28.597 16:11:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:28.597 16:11:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.165 16:11:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:29.165 16:11:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:29.165 16:11:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:29.165 16:11:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:29.165 16:11:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:29.165 16:11:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:29.165 16:11:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:29.165 16:11:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:29.165 16:11:55 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.165 16:11:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.165 [2024-12-12 16:11:55.252726] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:29.165 16:11:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.165 16:11:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:29.165 "name": "Existed_Raid", 00:15:29.165 "aliases": [ 00:15:29.165 "a52c56c4-c665-4e2e-b848-3b0d0befbec1" 00:15:29.165 ], 00:15:29.165 "product_name": "Raid Volume", 00:15:29.165 "block_size": 512, 00:15:29.165 "num_blocks": 126976, 00:15:29.165 "uuid": "a52c56c4-c665-4e2e-b848-3b0d0befbec1", 00:15:29.165 "assigned_rate_limits": { 00:15:29.165 "rw_ios_per_sec": 0, 00:15:29.165 "rw_mbytes_per_sec": 0, 00:15:29.165 "r_mbytes_per_sec": 0, 00:15:29.165 "w_mbytes_per_sec": 0 00:15:29.165 }, 00:15:29.165 "claimed": false, 00:15:29.165 "zoned": false, 00:15:29.165 "supported_io_types": { 00:15:29.165 "read": true, 00:15:29.165 "write": true, 00:15:29.165 "unmap": false, 00:15:29.165 "flush": false, 00:15:29.165 "reset": true, 00:15:29.165 "nvme_admin": false, 00:15:29.165 "nvme_io": false, 00:15:29.165 "nvme_io_md": false, 00:15:29.165 "write_zeroes": true, 00:15:29.165 "zcopy": false, 00:15:29.165 "get_zone_info": false, 00:15:29.165 "zone_management": false, 00:15:29.165 "zone_append": false, 00:15:29.165 "compare": false, 00:15:29.165 "compare_and_write": false, 00:15:29.165 "abort": false, 00:15:29.165 "seek_hole": false, 00:15:29.165 "seek_data": false, 00:15:29.165 "copy": false, 00:15:29.165 "nvme_iov_md": false 00:15:29.165 }, 00:15:29.165 "driver_specific": { 00:15:29.165 "raid": { 00:15:29.165 "uuid": "a52c56c4-c665-4e2e-b848-3b0d0befbec1", 00:15:29.165 "strip_size_kb": 64, 00:15:29.165 "state": "online", 00:15:29.165 
"raid_level": "raid5f", 00:15:29.165 "superblock": true, 00:15:29.165 "num_base_bdevs": 3, 00:15:29.165 "num_base_bdevs_discovered": 3, 00:15:29.165 "num_base_bdevs_operational": 3, 00:15:29.165 "base_bdevs_list": [ 00:15:29.165 { 00:15:29.165 "name": "NewBaseBdev", 00:15:29.165 "uuid": "1e62f818-fd3d-472c-a49a-689099f95885", 00:15:29.165 "is_configured": true, 00:15:29.165 "data_offset": 2048, 00:15:29.165 "data_size": 63488 00:15:29.165 }, 00:15:29.165 { 00:15:29.165 "name": "BaseBdev2", 00:15:29.165 "uuid": "3738b545-26ef-4b06-a31d-a1467bc586cb", 00:15:29.165 "is_configured": true, 00:15:29.165 "data_offset": 2048, 00:15:29.165 "data_size": 63488 00:15:29.165 }, 00:15:29.165 { 00:15:29.165 "name": "BaseBdev3", 00:15:29.165 "uuid": "4618563b-b1fe-451a-885d-6916284a77b3", 00:15:29.165 "is_configured": true, 00:15:29.165 "data_offset": 2048, 00:15:29.165 "data_size": 63488 00:15:29.165 } 00:15:29.165 ] 00:15:29.165 } 00:15:29.165 } 00:15:29.165 }' 00:15:29.165 16:11:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:29.165 16:11:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:29.165 BaseBdev2 00:15:29.165 BaseBdev3' 00:15:29.165 16:11:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:29.165 16:11:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:29.165 16:11:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:29.165 16:11:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:29.165 16:11:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" 
")' 00:15:29.165 16:11:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.165 16:11:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.165 16:11:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.165 16:11:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:29.165 16:11:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:29.165 16:11:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:29.165 16:11:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:29.165 16:11:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:29.165 16:11:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.165 16:11:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.165 16:11:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.165 16:11:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:29.165 16:11:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:29.165 16:11:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:29.166 16:11:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:29.166 16:11:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.166 16:11:55 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:29.166 16:11:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.166 16:11:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.425 16:11:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:29.425 16:11:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:29.425 16:11:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:29.425 16:11:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.425 16:11:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.425 [2024-12-12 16:11:55.524022] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:29.425 [2024-12-12 16:11:55.524074] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:29.425 [2024-12-12 16:11:55.524184] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:29.425 [2024-12-12 16:11:55.524511] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:29.425 [2024-12-12 16:11:55.524530] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:15:29.425 16:11:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.425 16:11:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 82590 00:15:29.425 16:11:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 82590 ']' 00:15:29.425 16:11:55 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@958 -- # kill -0 82590 00:15:29.425 16:11:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:15:29.425 16:11:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:29.425 16:11:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82590 00:15:29.425 16:11:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:29.425 16:11:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:29.425 16:11:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82590' 00:15:29.425 killing process with pid 82590 00:15:29.425 16:11:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 82590 00:15:29.425 [2024-12-12 16:11:55.572459] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:29.425 16:11:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 82590 00:15:29.685 [2024-12-12 16:11:55.901726] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:31.067 16:11:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:15:31.067 00:15:31.067 real 0m11.002s 00:15:31.067 user 0m17.048s 00:15:31.067 sys 0m2.027s 00:15:31.067 ************************************ 00:15:31.067 END TEST raid5f_state_function_test_sb 00:15:31.067 ************************************ 00:15:31.067 16:11:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:31.067 16:11:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.067 16:11:57 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:15:31.067 16:11:57 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:31.067 16:11:57 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:31.067 16:11:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:31.067 ************************************ 00:15:31.067 START TEST raid5f_superblock_test 00:15:31.067 ************************************ 00:15:31.067 16:11:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 3 00:15:31.067 16:11:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:15:31.067 16:11:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:15:31.067 16:11:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:15:31.067 16:11:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:15:31.067 16:11:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:15:31.067 16:11:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:15:31.067 16:11:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:15:31.067 16:11:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:15:31.067 16:11:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:15:31.067 16:11:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:15:31.067 16:11:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:15:31.067 16:11:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:15:31.067 16:11:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:15:31.067 16:11:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 
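The trace above (`bdev_raid.sh@404`-`@406`) shows the strip-size argument being chosen per RAID level: raid1 takes no strip size, every other level gets `-z 64` passed along to `bdev_raid_create`. A minimal standalone sketch of that selection (variable names follow the trace; the real script sets more state around it):

```shell
# Mirror of the check at bdev_raid.sh@404-406: only non-raid1 levels
# get a strip-size argument for bdev_raid_create.
raid_level=raid5f
strip_size_create_arg=''
if [ "$raid_level" != raid1 ]; then
	strip_size=64
	strip_size_create_arg="-z $strip_size"
fi
echo "$strip_size_create_arg"   # prints "-z 64" for raid5f, empty for raid1
```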
00:15:31.067 16:11:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:15:31.067 16:11:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:15:31.067 16:11:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=83215 00:15:31.067 16:11:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:15:31.067 16:11:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 83215 00:15:31.067 16:11:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 83215 ']' 00:15:31.067 16:11:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:31.067 16:11:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:31.067 16:11:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:31.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:31.067 16:11:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:31.067 16:11:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.327 [2024-12-12 16:11:57.448594] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
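The `waitforlisten 83215` call above blocks until the freshly started `bdev_svc` process is listening on `/var/tmp/spdk.sock`, with `max_retries=100` as a bound. A simplified sketch of that poll pattern, under the assumption that checking process liveness plus socket existence is the core of it (the real `autotest_common.sh` helper does more, e.g. RPC probing):

```shell
# Simplified waitforlisten-style loop: succeed once the target pid is alive
# AND its UNIX-domain RPC socket exists; fail if the process dies first or
# retries run out. Names follow the trace; logic is a hedged approximation.
waitforlisten_sketch() {
	local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
	echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
	while (( max_retries-- > 0 )); do
		kill -0 "$pid" 2>/dev/null || return 1   # process exited before listening
		[ -S "$rpc_addr" ] && return 0           # socket is up, target is ready
		sleep 0.1
	done
	return 1                                         # timed out
}
```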
00:15:31.327 [2024-12-12 16:11:57.448768] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83215 ] 00:15:31.327 [2024-12-12 16:11:57.601588] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:31.587 [2024-12-12 16:11:57.739622] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:31.847 [2024-12-12 16:11:57.960652] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:31.847 [2024-12-12 16:11:57.960752] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:32.106 16:11:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:32.106 16:11:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:15:32.106 16:11:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:15:32.106 16:11:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:32.106 16:11:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:15:32.106 16:11:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:15:32.106 16:11:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:32.106 16:11:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:32.106 16:11:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:32.106 16:11:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:32.106 16:11:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:15:32.106 16:11:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.106 16:11:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.106 malloc1 00:15:32.106 16:11:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.106 16:11:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:32.106 16:11:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.106 16:11:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.106 [2024-12-12 16:11:58.370709] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:32.106 [2024-12-12 16:11:58.370886] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:32.106 [2024-12-12 16:11:58.370951] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:32.106 [2024-12-12 16:11:58.370992] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:32.106 [2024-12-12 16:11:58.373486] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:32.106 [2024-12-12 16:11:58.373578] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:32.106 pt1 00:15:32.106 16:11:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.107 16:11:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:32.107 16:11:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:32.107 16:11:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:15:32.107 16:11:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:15:32.107 16:11:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:32.107 16:11:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:32.107 16:11:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:32.107 16:11:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:32.107 16:11:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:15:32.107 16:11:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.107 16:11:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.107 malloc2 00:15:32.107 16:11:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.107 16:11:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:32.107 16:11:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.107 16:11:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.107 [2024-12-12 16:11:58.437304] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:32.107 [2024-12-12 16:11:58.437367] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:32.107 [2024-12-12 16:11:58.437395] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:32.107 [2024-12-12 16:11:58.437407] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:32.107 [2024-12-12 16:11:58.439914] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:32.107 [2024-12-12 16:11:58.439959] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:32.107 pt2 00:15:32.107 16:11:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.107 16:11:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:32.107 16:11:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:32.107 16:11:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:15:32.107 16:11:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:15:32.107 16:11:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:15:32.107 16:11:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:32.107 16:11:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:32.107 16:11:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:32.107 16:11:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:15:32.107 16:11:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.107 16:11:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.367 malloc3 00:15:32.367 16:11:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.367 16:11:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:32.367 16:11:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.367 16:11:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.367 [2024-12-12 16:11:58.533166] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:32.367 [2024-12-12 16:11:58.533307] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:32.367 [2024-12-12 16:11:58.533355] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:32.367 [2024-12-12 16:11:58.533396] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:32.367 [2024-12-12 16:11:58.535859] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:32.367 [2024-12-12 16:11:58.535961] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:32.367 pt3 00:15:32.367 16:11:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.367 16:11:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:32.367 16:11:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:32.367 16:11:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:15:32.367 16:11:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.367 16:11:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.367 [2024-12-12 16:11:58.545176] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:32.367 [2024-12-12 16:11:58.547331] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:32.367 [2024-12-12 16:11:58.547412] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:32.367 [2024-12-12 16:11:58.547608] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:32.367 [2024-12-12 16:11:58.547634] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
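The `blockcnt 126976, blocklen 512` reported at configure time follows directly from the three base bdevs, each exposing `data_size` 63488 blocks: raid5f dedicates one strip per stripe to parity, so usable capacity is `(num_base_bdevs - 1) * data_size`. A quick arithmetic check of the figure in the log:

```shell
# raid5f usable capacity: one parity strip per stripe, so (n - 1) data strips.
num_base_bdevs=3
base_data_size=63488    # data blocks per base bdev (block_size 512, superblock at offset 2048)
num_blocks=$(( (num_base_bdevs - 1) * base_data_size ))
echo "$num_blocks"      # matches the blockcnt 126976 reported by raid_bdev_configure_cont
```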
00:15:32.367 [2024-12-12 16:11:58.547888] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:32.367 [2024-12-12 16:11:58.553608] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:32.367 [2024-12-12 16:11:58.553683] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:32.367 [2024-12-12 16:11:58.553924] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:32.367 16:11:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.367 16:11:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:32.367 16:11:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:32.367 16:11:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:32.367 16:11:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:32.367 16:11:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:32.367 16:11:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:32.367 16:11:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:32.367 16:11:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:32.367 16:11:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:32.367 16:11:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:32.367 16:11:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.367 16:11:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:15:32.367 16:11:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.367 16:11:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.367 16:11:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.367 16:11:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:32.367 "name": "raid_bdev1", 00:15:32.367 "uuid": "41d8dbb0-85e9-4b3f-ae77-d1337d2a54d2", 00:15:32.367 "strip_size_kb": 64, 00:15:32.367 "state": "online", 00:15:32.367 "raid_level": "raid5f", 00:15:32.367 "superblock": true, 00:15:32.367 "num_base_bdevs": 3, 00:15:32.367 "num_base_bdevs_discovered": 3, 00:15:32.367 "num_base_bdevs_operational": 3, 00:15:32.367 "base_bdevs_list": [ 00:15:32.367 { 00:15:32.367 "name": "pt1", 00:15:32.367 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:32.367 "is_configured": true, 00:15:32.367 "data_offset": 2048, 00:15:32.367 "data_size": 63488 00:15:32.367 }, 00:15:32.367 { 00:15:32.367 "name": "pt2", 00:15:32.367 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:32.367 "is_configured": true, 00:15:32.367 "data_offset": 2048, 00:15:32.367 "data_size": 63488 00:15:32.367 }, 00:15:32.367 { 00:15:32.367 "name": "pt3", 00:15:32.367 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:32.367 "is_configured": true, 00:15:32.367 "data_offset": 2048, 00:15:32.367 "data_size": 63488 00:15:32.367 } 00:15:32.367 ] 00:15:32.367 }' 00:15:32.367 16:11:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:32.367 16:11:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.627 16:11:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:15:32.627 16:11:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:32.627 16:11:58 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:32.627 16:11:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:32.627 16:11:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:32.627 16:11:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:32.627 16:11:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:32.627 16:11:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.627 16:11:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.627 16:11:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:32.887 [2024-12-12 16:11:58.981213] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:32.887 16:11:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.887 16:11:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:32.887 "name": "raid_bdev1", 00:15:32.887 "aliases": [ 00:15:32.887 "41d8dbb0-85e9-4b3f-ae77-d1337d2a54d2" 00:15:32.887 ], 00:15:32.887 "product_name": "Raid Volume", 00:15:32.887 "block_size": 512, 00:15:32.887 "num_blocks": 126976, 00:15:32.887 "uuid": "41d8dbb0-85e9-4b3f-ae77-d1337d2a54d2", 00:15:32.887 "assigned_rate_limits": { 00:15:32.887 "rw_ios_per_sec": 0, 00:15:32.887 "rw_mbytes_per_sec": 0, 00:15:32.887 "r_mbytes_per_sec": 0, 00:15:32.887 "w_mbytes_per_sec": 0 00:15:32.887 }, 00:15:32.887 "claimed": false, 00:15:32.887 "zoned": false, 00:15:32.887 "supported_io_types": { 00:15:32.887 "read": true, 00:15:32.887 "write": true, 00:15:32.887 "unmap": false, 00:15:32.887 "flush": false, 00:15:32.887 "reset": true, 00:15:32.887 "nvme_admin": false, 00:15:32.887 "nvme_io": false, 00:15:32.887 "nvme_io_md": false, 
00:15:32.887 "write_zeroes": true, 00:15:32.887 "zcopy": false, 00:15:32.887 "get_zone_info": false, 00:15:32.887 "zone_management": false, 00:15:32.887 "zone_append": false, 00:15:32.887 "compare": false, 00:15:32.887 "compare_and_write": false, 00:15:32.887 "abort": false, 00:15:32.887 "seek_hole": false, 00:15:32.887 "seek_data": false, 00:15:32.887 "copy": false, 00:15:32.887 "nvme_iov_md": false 00:15:32.887 }, 00:15:32.887 "driver_specific": { 00:15:32.887 "raid": { 00:15:32.887 "uuid": "41d8dbb0-85e9-4b3f-ae77-d1337d2a54d2", 00:15:32.887 "strip_size_kb": 64, 00:15:32.887 "state": "online", 00:15:32.887 "raid_level": "raid5f", 00:15:32.887 "superblock": true, 00:15:32.887 "num_base_bdevs": 3, 00:15:32.887 "num_base_bdevs_discovered": 3, 00:15:32.887 "num_base_bdevs_operational": 3, 00:15:32.887 "base_bdevs_list": [ 00:15:32.887 { 00:15:32.887 "name": "pt1", 00:15:32.887 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:32.887 "is_configured": true, 00:15:32.887 "data_offset": 2048, 00:15:32.888 "data_size": 63488 00:15:32.888 }, 00:15:32.888 { 00:15:32.888 "name": "pt2", 00:15:32.888 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:32.888 "is_configured": true, 00:15:32.888 "data_offset": 2048, 00:15:32.888 "data_size": 63488 00:15:32.888 }, 00:15:32.888 { 00:15:32.888 "name": "pt3", 00:15:32.888 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:32.888 "is_configured": true, 00:15:32.888 "data_offset": 2048, 00:15:32.888 "data_size": 63488 00:15:32.888 } 00:15:32.888 ] 00:15:32.888 } 00:15:32.888 } 00:15:32.888 }' 00:15:32.888 16:11:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:32.888 16:11:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:32.888 pt2 00:15:32.888 pt3' 00:15:32.888 16:11:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:15:32.888 16:11:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:32.888 16:11:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:32.888 16:11:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:32.888 16:11:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:32.888 16:11:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.888 16:11:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.888 16:11:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.888 16:11:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:32.888 16:11:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:32.888 16:11:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:32.888 16:11:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:32.888 16:11:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:32.888 16:11:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.888 16:11:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.888 16:11:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.888 16:11:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:32.888 16:11:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:32.888 
16:11:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:32.888 16:11:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:32.888 16:11:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.888 16:11:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.888 16:11:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:32.888 16:11:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.148 16:11:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:33.148 16:11:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:33.148 16:11:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:15:33.148 16:11:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:33.148 16:11:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.148 16:11:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.148 [2024-12-12 16:11:59.257005] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:33.148 16:11:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.148 16:11:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=41d8dbb0-85e9-4b3f-ae77-d1337d2a54d2 00:15:33.148 16:11:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 41d8dbb0-85e9-4b3f-ae77-d1337d2a54d2 ']' 00:15:33.148 16:11:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:33.148 16:11:59 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.148 16:11:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.148 [2024-12-12 16:11:59.304729] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:33.148 [2024-12-12 16:11:59.304825] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:33.148 [2024-12-12 16:11:59.304924] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:33.148 [2024-12-12 16:11:59.305002] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:33.148 [2024-12-12 16:11:59.305014] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:33.148 16:11:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.148 16:11:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.148 16:11:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:15:33.148 16:11:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.148 16:11:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.148 16:11:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.148 16:11:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:15:33.148 16:11:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:15:33.148 16:11:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:33.148 16:11:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:15:33.148 16:11:59 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.148 16:11:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.148 16:11:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.148 16:11:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:33.148 16:11:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:15:33.148 16:11:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.148 16:11:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.148 16:11:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.148 16:11:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:33.148 16:11:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:15:33.148 16:11:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.148 16:11:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.149 16:11:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.149 16:11:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:15:33.149 16:11:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:33.149 16:11:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.149 16:11:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.149 16:11:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.149 16:11:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # 
'[' false == true ']' 00:15:33.149 16:11:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:33.149 16:11:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:15:33.149 16:11:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:33.149 16:11:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:33.149 16:11:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:33.149 16:11:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:33.149 16:11:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:33.149 16:11:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:33.149 16:11:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.149 16:11:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.149 [2024-12-12 16:11:59.452531] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:33.149 [2024-12-12 16:11:59.454773] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:33.149 [2024-12-12 16:11:59.454887] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:15:33.149 [2024-12-12 16:11:59.454986] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:33.149 [2024-12-12 16:11:59.455089] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:33.149 [2024-12-12 16:11:59.455163] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:15:33.149 [2024-12-12 16:11:59.455235] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:33.149 [2024-12-12 16:11:59.455274] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:15:33.149 request: 00:15:33.149 { 00:15:33.149 "name": "raid_bdev1", 00:15:33.149 "raid_level": "raid5f", 00:15:33.149 "base_bdevs": [ 00:15:33.149 "malloc1", 00:15:33.149 "malloc2", 00:15:33.149 "malloc3" 00:15:33.149 ], 00:15:33.149 "strip_size_kb": 64, 00:15:33.149 "superblock": false, 00:15:33.149 "method": "bdev_raid_create", 00:15:33.149 "req_id": 1 00:15:33.149 } 00:15:33.149 Got JSON-RPC error response 00:15:33.149 response: 00:15:33.149 { 00:15:33.149 "code": -17, 00:15:33.149 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:33.149 } 00:15:33.149 16:11:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:33.149 16:11:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:15:33.149 16:11:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:33.149 16:11:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:33.149 16:11:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:33.149 16:11:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:15:33.149 16:11:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.149 16:11:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.149 
16:11:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.149 16:11:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.409 16:11:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:15:33.409 16:11:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:15:33.409 16:11:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:33.409 16:11:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.409 16:11:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.409 [2024-12-12 16:11:59.508378] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:33.409 [2024-12-12 16:11:59.508479] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:33.409 [2024-12-12 16:11:59.508524] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:33.409 [2024-12-12 16:11:59.508568] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:33.409 [2024-12-12 16:11:59.511071] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:33.409 [2024-12-12 16:11:59.511155] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:33.409 [2024-12-12 16:11:59.511266] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:33.409 [2024-12-12 16:11:59.511347] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:33.409 pt1 00:15:33.409 16:11:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.409 16:11:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 
3 00:15:33.409 16:11:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:33.409 16:11:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:33.409 16:11:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:33.409 16:11:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:33.409 16:11:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:33.409 16:11:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:33.409 16:11:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:33.409 16:11:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:33.409 16:11:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:33.409 16:11:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:33.409 16:11:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.409 16:11:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.409 16:11:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.409 16:11:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.409 16:11:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:33.409 "name": "raid_bdev1", 00:15:33.409 "uuid": "41d8dbb0-85e9-4b3f-ae77-d1337d2a54d2", 00:15:33.409 "strip_size_kb": 64, 00:15:33.409 "state": "configuring", 00:15:33.409 "raid_level": "raid5f", 00:15:33.409 "superblock": true, 00:15:33.409 "num_base_bdevs": 3, 00:15:33.409 "num_base_bdevs_discovered": 1, 00:15:33.409 
"num_base_bdevs_operational": 3, 00:15:33.409 "base_bdevs_list": [ 00:15:33.409 { 00:15:33.409 "name": "pt1", 00:15:33.409 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:33.409 "is_configured": true, 00:15:33.409 "data_offset": 2048, 00:15:33.409 "data_size": 63488 00:15:33.409 }, 00:15:33.409 { 00:15:33.409 "name": null, 00:15:33.409 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:33.409 "is_configured": false, 00:15:33.409 "data_offset": 2048, 00:15:33.409 "data_size": 63488 00:15:33.409 }, 00:15:33.409 { 00:15:33.409 "name": null, 00:15:33.409 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:33.409 "is_configured": false, 00:15:33.409 "data_offset": 2048, 00:15:33.409 "data_size": 63488 00:15:33.409 } 00:15:33.409 ] 00:15:33.409 }' 00:15:33.409 16:11:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:33.409 16:11:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.670 16:11:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:15:33.670 16:11:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:33.670 16:11:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.670 16:11:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.670 [2024-12-12 16:11:59.947649] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:33.670 [2024-12-12 16:11:59.947704] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:33.670 [2024-12-12 16:11:59.947725] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:15:33.670 [2024-12-12 16:11:59.947735] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:33.670 [2024-12-12 16:11:59.948192] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:33.670 [2024-12-12 16:11:59.948221] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:33.670 [2024-12-12 16:11:59.948297] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:33.670 [2024-12-12 16:11:59.948332] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:33.670 pt2 00:15:33.670 16:11:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.670 16:11:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:15:33.670 16:11:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.670 16:11:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.670 [2024-12-12 16:11:59.959655] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:15:33.670 16:11:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.670 16:11:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:15:33.670 16:11:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:33.670 16:11:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:33.670 16:11:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:33.670 16:11:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:33.670 16:11:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:33.670 16:11:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:33.670 16:11:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:15:33.670 16:11:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:33.670 16:11:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:33.670 16:11:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:33.670 16:11:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.670 16:11:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.670 16:11:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.670 16:11:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.670 16:12:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:33.670 "name": "raid_bdev1", 00:15:33.670 "uuid": "41d8dbb0-85e9-4b3f-ae77-d1337d2a54d2", 00:15:33.670 "strip_size_kb": 64, 00:15:33.670 "state": "configuring", 00:15:33.670 "raid_level": "raid5f", 00:15:33.670 "superblock": true, 00:15:33.670 "num_base_bdevs": 3, 00:15:33.670 "num_base_bdevs_discovered": 1, 00:15:33.670 "num_base_bdevs_operational": 3, 00:15:33.670 "base_bdevs_list": [ 00:15:33.670 { 00:15:33.670 "name": "pt1", 00:15:33.670 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:33.670 "is_configured": true, 00:15:33.670 "data_offset": 2048, 00:15:33.670 "data_size": 63488 00:15:33.670 }, 00:15:33.670 { 00:15:33.670 "name": null, 00:15:33.670 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:33.670 "is_configured": false, 00:15:33.670 "data_offset": 0, 00:15:33.670 "data_size": 63488 00:15:33.670 }, 00:15:33.670 { 00:15:33.670 "name": null, 00:15:33.670 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:33.670 "is_configured": false, 00:15:33.670 "data_offset": 2048, 00:15:33.670 "data_size": 63488 00:15:33.670 } 00:15:33.670 ] 00:15:33.670 }' 00:15:33.670 16:12:00 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:33.670 16:12:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.247 16:12:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:15:34.247 16:12:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:34.247 16:12:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:34.248 16:12:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.248 16:12:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.248 [2024-12-12 16:12:00.430817] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:34.248 [2024-12-12 16:12:00.430969] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:34.248 [2024-12-12 16:12:00.431009] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:15:34.248 [2024-12-12 16:12:00.431049] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:34.248 [2024-12-12 16:12:00.431526] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:34.248 [2024-12-12 16:12:00.431574] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:34.248 [2024-12-12 16:12:00.431657] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:34.248 [2024-12-12 16:12:00.431682] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:34.248 pt2 00:15:34.248 16:12:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.248 16:12:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:34.248 16:12:00 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:34.248 16:12:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:34.248 16:12:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.248 16:12:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.248 [2024-12-12 16:12:00.442787] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:34.248 [2024-12-12 16:12:00.442886] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:34.248 [2024-12-12 16:12:00.442935] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:34.248 [2024-12-12 16:12:00.442970] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:34.248 [2024-12-12 16:12:00.443381] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:34.248 [2024-12-12 16:12:00.443457] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:34.248 [2024-12-12 16:12:00.443552] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:34.248 [2024-12-12 16:12:00.443614] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:34.248 [2024-12-12 16:12:00.443771] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:34.248 [2024-12-12 16:12:00.443818] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:34.248 [2024-12-12 16:12:00.444100] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:34.248 [2024-12-12 16:12:00.448782] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:34.248 [2024-12-12 16:12:00.448845] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:15:34.248 [2024-12-12 16:12:00.449082] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:34.248 pt3 00:15:34.248 16:12:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.248 16:12:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:34.248 16:12:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:34.248 16:12:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:34.248 16:12:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:34.248 16:12:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:34.248 16:12:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:34.248 16:12:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:34.248 16:12:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:34.248 16:12:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:34.248 16:12:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:34.248 16:12:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:34.248 16:12:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:34.248 16:12:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.248 16:12:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:34.248 16:12:00 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.248 16:12:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.248 16:12:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.248 16:12:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:34.248 "name": "raid_bdev1", 00:15:34.248 "uuid": "41d8dbb0-85e9-4b3f-ae77-d1337d2a54d2", 00:15:34.248 "strip_size_kb": 64, 00:15:34.248 "state": "online", 00:15:34.248 "raid_level": "raid5f", 00:15:34.248 "superblock": true, 00:15:34.248 "num_base_bdevs": 3, 00:15:34.248 "num_base_bdevs_discovered": 3, 00:15:34.248 "num_base_bdevs_operational": 3, 00:15:34.248 "base_bdevs_list": [ 00:15:34.248 { 00:15:34.248 "name": "pt1", 00:15:34.248 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:34.248 "is_configured": true, 00:15:34.248 "data_offset": 2048, 00:15:34.248 "data_size": 63488 00:15:34.248 }, 00:15:34.248 { 00:15:34.248 "name": "pt2", 00:15:34.248 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:34.248 "is_configured": true, 00:15:34.248 "data_offset": 2048, 00:15:34.248 "data_size": 63488 00:15:34.248 }, 00:15:34.248 { 00:15:34.248 "name": "pt3", 00:15:34.248 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:34.248 "is_configured": true, 00:15:34.248 "data_offset": 2048, 00:15:34.248 "data_size": 63488 00:15:34.248 } 00:15:34.248 ] 00:15:34.248 }' 00:15:34.248 16:12:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:34.248 16:12:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.830 16:12:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:15:34.830 16:12:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:34.830 16:12:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:15:34.830 16:12:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:34.830 16:12:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:34.830 16:12:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:34.830 16:12:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:34.830 16:12:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:34.830 16:12:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.830 16:12:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.830 [2024-12-12 16:12:00.943420] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:34.830 16:12:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.830 16:12:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:34.830 "name": "raid_bdev1", 00:15:34.830 "aliases": [ 00:15:34.830 "41d8dbb0-85e9-4b3f-ae77-d1337d2a54d2" 00:15:34.830 ], 00:15:34.830 "product_name": "Raid Volume", 00:15:34.830 "block_size": 512, 00:15:34.830 "num_blocks": 126976, 00:15:34.830 "uuid": "41d8dbb0-85e9-4b3f-ae77-d1337d2a54d2", 00:15:34.830 "assigned_rate_limits": { 00:15:34.830 "rw_ios_per_sec": 0, 00:15:34.830 "rw_mbytes_per_sec": 0, 00:15:34.830 "r_mbytes_per_sec": 0, 00:15:34.830 "w_mbytes_per_sec": 0 00:15:34.830 }, 00:15:34.830 "claimed": false, 00:15:34.830 "zoned": false, 00:15:34.830 "supported_io_types": { 00:15:34.830 "read": true, 00:15:34.830 "write": true, 00:15:34.830 "unmap": false, 00:15:34.830 "flush": false, 00:15:34.830 "reset": true, 00:15:34.830 "nvme_admin": false, 00:15:34.830 "nvme_io": false, 00:15:34.830 "nvme_io_md": false, 00:15:34.830 "write_zeroes": true, 00:15:34.830 "zcopy": false, 00:15:34.830 
"get_zone_info": false, 00:15:34.830 "zone_management": false, 00:15:34.830 "zone_append": false, 00:15:34.830 "compare": false, 00:15:34.830 "compare_and_write": false, 00:15:34.831 "abort": false, 00:15:34.831 "seek_hole": false, 00:15:34.831 "seek_data": false, 00:15:34.831 "copy": false, 00:15:34.831 "nvme_iov_md": false 00:15:34.831 }, 00:15:34.831 "driver_specific": { 00:15:34.831 "raid": { 00:15:34.831 "uuid": "41d8dbb0-85e9-4b3f-ae77-d1337d2a54d2", 00:15:34.831 "strip_size_kb": 64, 00:15:34.831 "state": "online", 00:15:34.831 "raid_level": "raid5f", 00:15:34.831 "superblock": true, 00:15:34.831 "num_base_bdevs": 3, 00:15:34.831 "num_base_bdevs_discovered": 3, 00:15:34.831 "num_base_bdevs_operational": 3, 00:15:34.831 "base_bdevs_list": [ 00:15:34.831 { 00:15:34.831 "name": "pt1", 00:15:34.831 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:34.831 "is_configured": true, 00:15:34.831 "data_offset": 2048, 00:15:34.831 "data_size": 63488 00:15:34.831 }, 00:15:34.831 { 00:15:34.831 "name": "pt2", 00:15:34.831 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:34.831 "is_configured": true, 00:15:34.831 "data_offset": 2048, 00:15:34.831 "data_size": 63488 00:15:34.831 }, 00:15:34.831 { 00:15:34.831 "name": "pt3", 00:15:34.831 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:34.831 "is_configured": true, 00:15:34.831 "data_offset": 2048, 00:15:34.831 "data_size": 63488 00:15:34.831 } 00:15:34.831 ] 00:15:34.831 } 00:15:34.831 } 00:15:34.831 }' 00:15:34.831 16:12:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:34.831 16:12:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:34.831 pt2 00:15:34.831 pt3' 00:15:34.831 16:12:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:34.831 16:12:01 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:34.831 16:12:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:34.831 16:12:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:34.831 16:12:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:34.831 16:12:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.831 16:12:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.831 16:12:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.831 16:12:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:34.831 16:12:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:34.831 16:12:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:34.831 16:12:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:34.831 16:12:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:34.831 16:12:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.831 16:12:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.831 16:12:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.831 16:12:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:34.831 16:12:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:34.831 16:12:01 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:34.831 16:12:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:34.831 16:12:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:34.831 16:12:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.831 16:12:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.831 16:12:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.831 16:12:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:34.831 16:12:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:34.831 16:12:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:34.831 16:12:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.831 16:12:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.831 16:12:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:15:34.831 [2024-12-12 16:12:01.163159] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:34.831 16:12:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.090 16:12:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 41d8dbb0-85e9-4b3f-ae77-d1337d2a54d2 '!=' 41d8dbb0-85e9-4b3f-ae77-d1337d2a54d2 ']' 00:15:35.090 16:12:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:15:35.090 16:12:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:35.090 16:12:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 
00:15:35.090 16:12:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:15:35.090 16:12:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.091 16:12:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.091 [2024-12-12 16:12:01.210969] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:15:35.091 16:12:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.091 16:12:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:35.091 16:12:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:35.091 16:12:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:35.091 16:12:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:35.091 16:12:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:35.091 16:12:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:35.091 16:12:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:35.091 16:12:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:35.091 16:12:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:35.091 16:12:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:35.091 16:12:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.091 16:12:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.091 16:12:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.091 
16:12:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:35.091 16:12:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.091 16:12:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:35.091 "name": "raid_bdev1", 00:15:35.091 "uuid": "41d8dbb0-85e9-4b3f-ae77-d1337d2a54d2", 00:15:35.091 "strip_size_kb": 64, 00:15:35.091 "state": "online", 00:15:35.091 "raid_level": "raid5f", 00:15:35.091 "superblock": true, 00:15:35.091 "num_base_bdevs": 3, 00:15:35.091 "num_base_bdevs_discovered": 2, 00:15:35.091 "num_base_bdevs_operational": 2, 00:15:35.091 "base_bdevs_list": [ 00:15:35.091 { 00:15:35.091 "name": null, 00:15:35.091 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.091 "is_configured": false, 00:15:35.091 "data_offset": 0, 00:15:35.091 "data_size": 63488 00:15:35.091 }, 00:15:35.091 { 00:15:35.091 "name": "pt2", 00:15:35.091 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:35.091 "is_configured": true, 00:15:35.091 "data_offset": 2048, 00:15:35.091 "data_size": 63488 00:15:35.091 }, 00:15:35.091 { 00:15:35.091 "name": "pt3", 00:15:35.091 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:35.091 "is_configured": true, 00:15:35.091 "data_offset": 2048, 00:15:35.091 "data_size": 63488 00:15:35.091 } 00:15:35.091 ] 00:15:35.091 }' 00:15:35.091 16:12:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:35.091 16:12:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.349 16:12:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:35.349 16:12:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.349 16:12:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.349 [2024-12-12 16:12:01.682163] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:35.349 [2024-12-12 16:12:01.682307] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:35.349 [2024-12-12 16:12:01.682438] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:35.349 [2024-12-12 16:12:01.682533] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:35.349 [2024-12-12 16:12:01.682602] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:15:35.349 16:12:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.349 16:12:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.349 16:12:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.349 16:12:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.349 16:12:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:15:35.349 16:12:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.608 16:12:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:15:35.608 16:12:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:15:35.608 16:12:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:15:35.608 16:12:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:35.608 16:12:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:15:35.608 16:12:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.608 16:12:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:15:35.608 16:12:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.608 16:12:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:35.608 16:12:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:35.608 16:12:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:15:35.608 16:12:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.608 16:12:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.608 16:12:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.608 16:12:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:35.608 16:12:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:35.608 16:12:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:15:35.608 16:12:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:35.608 16:12:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:35.608 16:12:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.608 16:12:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.608 [2024-12-12 16:12:01.770017] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:35.608 [2024-12-12 16:12:01.770190] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:35.608 [2024-12-12 16:12:01.770238] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:15:35.608 [2024-12-12 16:12:01.770286] vbdev_passthru.c: 697:vbdev_passthru_register: 
*NOTICE*: bdev claimed 00:15:35.608 [2024-12-12 16:12:01.772910] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:35.608 [2024-12-12 16:12:01.773005] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:35.608 [2024-12-12 16:12:01.773141] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:35.608 [2024-12-12 16:12:01.773236] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:35.608 pt2 00:15:35.608 16:12:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.608 16:12:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:15:35.608 16:12:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:35.608 16:12:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:35.608 16:12:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:35.608 16:12:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:35.608 16:12:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:35.608 16:12:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:35.608 16:12:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:35.608 16:12:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:35.608 16:12:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:35.608 16:12:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.608 16:12:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:15:35.608 16:12:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.608 16:12:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.608 16:12:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.608 16:12:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:35.608 "name": "raid_bdev1", 00:15:35.608 "uuid": "41d8dbb0-85e9-4b3f-ae77-d1337d2a54d2", 00:15:35.608 "strip_size_kb": 64, 00:15:35.608 "state": "configuring", 00:15:35.608 "raid_level": "raid5f", 00:15:35.608 "superblock": true, 00:15:35.608 "num_base_bdevs": 3, 00:15:35.608 "num_base_bdevs_discovered": 1, 00:15:35.608 "num_base_bdevs_operational": 2, 00:15:35.608 "base_bdevs_list": [ 00:15:35.608 { 00:15:35.608 "name": null, 00:15:35.608 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.608 "is_configured": false, 00:15:35.608 "data_offset": 2048, 00:15:35.608 "data_size": 63488 00:15:35.608 }, 00:15:35.608 { 00:15:35.608 "name": "pt2", 00:15:35.608 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:35.608 "is_configured": true, 00:15:35.608 "data_offset": 2048, 00:15:35.608 "data_size": 63488 00:15:35.608 }, 00:15:35.608 { 00:15:35.608 "name": null, 00:15:35.608 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:35.608 "is_configured": false, 00:15:35.608 "data_offset": 2048, 00:15:35.608 "data_size": 63488 00:15:35.608 } 00:15:35.608 ] 00:15:35.608 }' 00:15:35.608 16:12:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:35.608 16:12:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.176 16:12:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:15:36.176 16:12:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:36.176 16:12:02 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@519 -- # i=2 00:15:36.176 16:12:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:36.176 16:12:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.176 16:12:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.176 [2024-12-12 16:12:02.261205] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:36.176 [2024-12-12 16:12:02.261329] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:36.176 [2024-12-12 16:12:02.261360] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:15:36.176 [2024-12-12 16:12:02.261375] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:36.176 [2024-12-12 16:12:02.262002] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:36.176 [2024-12-12 16:12:02.262033] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:36.176 [2024-12-12 16:12:02.262151] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:36.176 [2024-12-12 16:12:02.262187] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:36.176 [2024-12-12 16:12:02.262337] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:36.176 [2024-12-12 16:12:02.262351] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:36.176 [2024-12-12 16:12:02.262655] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:36.176 [2024-12-12 16:12:02.267991] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:36.176 [2024-12-12 16:12:02.268104] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created 
with name raid_bdev1, raid_bdev 0x617000008200 00:15:36.176 [2024-12-12 16:12:02.268484] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:36.176 pt3 00:15:36.176 16:12:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.176 16:12:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:36.176 16:12:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:36.176 16:12:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:36.176 16:12:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:36.176 16:12:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:36.176 16:12:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:36.176 16:12:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:36.176 16:12:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:36.176 16:12:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:36.176 16:12:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:36.176 16:12:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.176 16:12:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:36.176 16:12:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.176 16:12:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.176 16:12:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.176 16:12:02 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:36.176 "name": "raid_bdev1", 00:15:36.176 "uuid": "41d8dbb0-85e9-4b3f-ae77-d1337d2a54d2", 00:15:36.176 "strip_size_kb": 64, 00:15:36.176 "state": "online", 00:15:36.176 "raid_level": "raid5f", 00:15:36.176 "superblock": true, 00:15:36.176 "num_base_bdevs": 3, 00:15:36.176 "num_base_bdevs_discovered": 2, 00:15:36.176 "num_base_bdevs_operational": 2, 00:15:36.176 "base_bdevs_list": [ 00:15:36.176 { 00:15:36.176 "name": null, 00:15:36.176 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.176 "is_configured": false, 00:15:36.176 "data_offset": 2048, 00:15:36.176 "data_size": 63488 00:15:36.176 }, 00:15:36.176 { 00:15:36.176 "name": "pt2", 00:15:36.176 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:36.176 "is_configured": true, 00:15:36.176 "data_offset": 2048, 00:15:36.176 "data_size": 63488 00:15:36.176 }, 00:15:36.176 { 00:15:36.176 "name": "pt3", 00:15:36.176 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:36.176 "is_configured": true, 00:15:36.176 "data_offset": 2048, 00:15:36.176 "data_size": 63488 00:15:36.176 } 00:15:36.176 ] 00:15:36.176 }' 00:15:36.176 16:12:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:36.176 16:12:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.436 16:12:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:36.436 16:12:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.436 16:12:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.436 [2024-12-12 16:12:02.674928] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:36.436 [2024-12-12 16:12:02.675082] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:36.436 [2024-12-12 16:12:02.675220] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:36.436 [2024-12-12 16:12:02.675325] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:36.436 [2024-12-12 16:12:02.675421] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:15:36.436 16:12:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.436 16:12:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.436 16:12:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:15:36.436 16:12:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.436 16:12:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.436 16:12:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.436 16:12:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:15:36.436 16:12:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:15:36.436 16:12:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:15:36.436 16:12:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:15:36.436 16:12:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:15:36.436 16:12:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.436 16:12:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.436 16:12:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.436 16:12:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 
00:15:36.436 16:12:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.436 16:12:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.436 [2024-12-12 16:12:02.746742] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:36.436 [2024-12-12 16:12:02.746815] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:36.436 [2024-12-12 16:12:02.746840] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:15:36.436 [2024-12-12 16:12:02.746852] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:36.436 [2024-12-12 16:12:02.749541] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:36.436 [2024-12-12 16:12:02.749587] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:36.436 [2024-12-12 16:12:02.749686] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:36.436 [2024-12-12 16:12:02.749746] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:36.436 [2024-12-12 16:12:02.749947] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:15:36.436 [2024-12-12 16:12:02.749963] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:36.436 [2024-12-12 16:12:02.749981] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:15:36.436 [2024-12-12 16:12:02.750046] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:36.436 pt1 00:15:36.436 16:12:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.436 16:12:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:15:36.436 16:12:02 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:15:36.436 16:12:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:36.436 16:12:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:36.436 16:12:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:36.436 16:12:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:36.436 16:12:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:36.436 16:12:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:36.436 16:12:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:36.436 16:12:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:36.436 16:12:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:36.436 16:12:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.436 16:12:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:36.436 16:12:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.436 16:12:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.436 16:12:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.696 16:12:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:36.696 "name": "raid_bdev1", 00:15:36.696 "uuid": "41d8dbb0-85e9-4b3f-ae77-d1337d2a54d2", 00:15:36.696 "strip_size_kb": 64, 00:15:36.696 "state": "configuring", 00:15:36.696 "raid_level": "raid5f", 00:15:36.696 
"superblock": true, 00:15:36.696 "num_base_bdevs": 3, 00:15:36.696 "num_base_bdevs_discovered": 1, 00:15:36.696 "num_base_bdevs_operational": 2, 00:15:36.696 "base_bdevs_list": [ 00:15:36.696 { 00:15:36.696 "name": null, 00:15:36.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.696 "is_configured": false, 00:15:36.696 "data_offset": 2048, 00:15:36.696 "data_size": 63488 00:15:36.696 }, 00:15:36.696 { 00:15:36.696 "name": "pt2", 00:15:36.696 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:36.696 "is_configured": true, 00:15:36.696 "data_offset": 2048, 00:15:36.696 "data_size": 63488 00:15:36.696 }, 00:15:36.696 { 00:15:36.696 "name": null, 00:15:36.696 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:36.696 "is_configured": false, 00:15:36.696 "data_offset": 2048, 00:15:36.696 "data_size": 63488 00:15:36.696 } 00:15:36.696 ] 00:15:36.696 }' 00:15:36.696 16:12:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:36.696 16:12:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.955 16:12:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:15:36.955 16:12:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.955 16:12:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.955 16:12:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:36.955 16:12:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.955 16:12:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:15:36.955 16:12:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:36.955 16:12:03 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.955 16:12:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.955 [2024-12-12 16:12:03.202015] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:36.955 [2024-12-12 16:12:03.202069] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:36.955 [2024-12-12 16:12:03.202089] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:15:36.955 [2024-12-12 16:12:03.202100] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:36.955 [2024-12-12 16:12:03.202547] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:36.955 [2024-12-12 16:12:03.202567] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:36.955 [2024-12-12 16:12:03.202637] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:36.955 [2024-12-12 16:12:03.202655] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:36.955 [2024-12-12 16:12:03.202791] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:15:36.955 [2024-12-12 16:12:03.202801] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:36.955 [2024-12-12 16:12:03.203093] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:36.955 [2024-12-12 16:12:03.208507] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:15:36.955 pt3 00:15:36.955 [2024-12-12 16:12:03.208603] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:15:36.955 [2024-12-12 16:12:03.208867] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:36.955 16:12:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:15:36.955 16:12:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:36.955 16:12:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:36.955 16:12:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:36.955 16:12:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:36.955 16:12:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:36.955 16:12:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:36.955 16:12:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:36.955 16:12:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:36.955 16:12:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:36.955 16:12:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:36.955 16:12:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.955 16:12:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.955 16:12:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.955 16:12:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:36.955 16:12:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.955 16:12:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:36.955 "name": "raid_bdev1", 00:15:36.955 "uuid": "41d8dbb0-85e9-4b3f-ae77-d1337d2a54d2", 00:15:36.955 "strip_size_kb": 64, 00:15:36.955 "state": "online", 00:15:36.955 "raid_level": 
"raid5f", 00:15:36.955 "superblock": true, 00:15:36.955 "num_base_bdevs": 3, 00:15:36.955 "num_base_bdevs_discovered": 2, 00:15:36.956 "num_base_bdevs_operational": 2, 00:15:36.956 "base_bdevs_list": [ 00:15:36.956 { 00:15:36.956 "name": null, 00:15:36.956 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.956 "is_configured": false, 00:15:36.956 "data_offset": 2048, 00:15:36.956 "data_size": 63488 00:15:36.956 }, 00:15:36.956 { 00:15:36.956 "name": "pt2", 00:15:36.956 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:36.956 "is_configured": true, 00:15:36.956 "data_offset": 2048, 00:15:36.956 "data_size": 63488 00:15:36.956 }, 00:15:36.956 { 00:15:36.956 "name": "pt3", 00:15:36.956 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:36.956 "is_configured": true, 00:15:36.956 "data_offset": 2048, 00:15:36.956 "data_size": 63488 00:15:36.956 } 00:15:36.956 ] 00:15:36.956 }' 00:15:36.956 16:12:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:36.956 16:12:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.525 16:12:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:15:37.525 16:12:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:37.525 16:12:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.525 16:12:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.525 16:12:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.525 16:12:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:15:37.525 16:12:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:15:37.525 16:12:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 
00:15:37.525 16:12:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.525 16:12:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.525 [2024-12-12 16:12:03.639380] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:37.525 16:12:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.525 16:12:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 41d8dbb0-85e9-4b3f-ae77-d1337d2a54d2 '!=' 41d8dbb0-85e9-4b3f-ae77-d1337d2a54d2 ']' 00:15:37.525 16:12:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 83215 00:15:37.525 16:12:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 83215 ']' 00:15:37.525 16:12:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 83215 00:15:37.525 16:12:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:15:37.525 16:12:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:37.525 16:12:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83215 00:15:37.525 16:12:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:37.525 16:12:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:37.525 16:12:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83215' 00:15:37.525 killing process with pid 83215 00:15:37.525 16:12:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 83215 00:15:37.525 [2024-12-12 16:12:03.704922] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:37.525 [2024-12-12 16:12:03.705105] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 
00:15:37.525 16:12:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 83215 00:15:37.525 [2024-12-12 16:12:03.705210] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:37.525 [2024-12-12 16:12:03.705277] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:15:37.785 [2024-12-12 16:12:04.033916] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:39.167 16:12:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:15:39.167 00:15:39.167 real 0m7.897s 00:15:39.167 user 0m12.116s 00:15:39.167 sys 0m1.467s 00:15:39.167 16:12:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:39.167 16:12:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.167 ************************************ 00:15:39.167 END TEST raid5f_superblock_test 00:15:39.167 ************************************ 00:15:39.167 16:12:05 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:15:39.167 16:12:05 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:15:39.167 16:12:05 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:39.167 16:12:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:39.167 16:12:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:39.167 ************************************ 00:15:39.167 START TEST raid5f_rebuild_test 00:15:39.167 ************************************ 00:15:39.167 16:12:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 false false true 00:15:39.167 16:12:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:15:39.167 16:12:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=3 00:15:39.167 16:12:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:15:39.167 16:12:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:39.167 16:12:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:39.167 16:12:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:39.167 16:12:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:39.167 16:12:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:39.167 16:12:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:39.167 16:12:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:39.167 16:12:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:39.167 16:12:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:39.167 16:12:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:39.167 16:12:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:39.167 16:12:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:39.167 16:12:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:39.167 16:12:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:39.167 16:12:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:39.167 16:12:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:39.167 16:12:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:39.167 16:12:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:39.167 16:12:05 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:39.167 16:12:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:39.167 16:12:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:15:39.167 16:12:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:15:39.167 16:12:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:15:39.167 16:12:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:15:39.167 16:12:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:15:39.167 16:12:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=83660 00:15:39.167 16:12:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:39.167 16:12:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 83660 00:15:39.167 16:12:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 83660 ']' 00:15:39.167 16:12:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:39.167 16:12:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:39.167 16:12:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:39.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:39.167 16:12:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:39.167 16:12:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.167 [2024-12-12 16:12:05.427859] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:15:39.167 [2024-12-12 16:12:05.428089] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:15:39.167 Zero copy mechanism will not be used. 00:15:39.167 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83660 ] 00:15:39.427 [2024-12-12 16:12:05.603146] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:39.427 [2024-12-12 16:12:05.730691] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:39.686 [2024-12-12 16:12:05.969387] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:39.686 [2024-12-12 16:12:05.969551] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:39.946 16:12:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:39.946 16:12:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:15:39.946 16:12:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:39.946 16:12:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:39.946 16:12:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.946 16:12:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.946 BaseBdev1_malloc 00:15:39.946 16:12:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.946 
16:12:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:39.946 16:12:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.946 16:12:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.207 [2024-12-12 16:12:06.300552] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:40.207 [2024-12-12 16:12:06.300642] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:40.207 [2024-12-12 16:12:06.300670] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:40.207 [2024-12-12 16:12:06.300685] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:40.207 [2024-12-12 16:12:06.303090] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:40.207 [2024-12-12 16:12:06.303139] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:40.207 BaseBdev1 00:15:40.207 16:12:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.207 16:12:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:40.207 16:12:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:40.207 16:12:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.207 16:12:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.207 BaseBdev2_malloc 00:15:40.207 16:12:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.207 16:12:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:40.207 16:12:06 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.207 16:12:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.207 [2024-12-12 16:12:06.361847] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:40.207 [2024-12-12 16:12:06.361932] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:40.207 [2024-12-12 16:12:06.361955] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:40.207 [2024-12-12 16:12:06.361970] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:40.207 [2024-12-12 16:12:06.364395] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:40.207 [2024-12-12 16:12:06.364441] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:40.207 BaseBdev2 00:15:40.207 16:12:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.207 16:12:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:40.207 16:12:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:40.207 16:12:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.207 16:12:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.207 BaseBdev3_malloc 00:15:40.207 16:12:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.207 16:12:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:40.207 16:12:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.207 16:12:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.207 [2024-12-12 16:12:06.444927] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:40.207 [2024-12-12 16:12:06.444982] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:40.208 [2024-12-12 16:12:06.445008] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:40.208 [2024-12-12 16:12:06.445023] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:40.208 [2024-12-12 16:12:06.447327] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:40.208 [2024-12-12 16:12:06.447374] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:40.208 BaseBdev3 00:15:40.208 16:12:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.208 16:12:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:40.208 16:12:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.208 16:12:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.208 spare_malloc 00:15:40.208 16:12:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.208 16:12:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:40.208 16:12:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.208 16:12:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.208 spare_delay 00:15:40.208 16:12:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.208 16:12:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:40.208 16:12:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:40.208 16:12:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.208 [2024-12-12 16:12:06.519582] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:40.208 [2024-12-12 16:12:06.519643] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:40.208 [2024-12-12 16:12:06.519667] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:15:40.208 [2024-12-12 16:12:06.519682] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:40.208 [2024-12-12 16:12:06.522068] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:40.208 [2024-12-12 16:12:06.522117] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:40.208 spare 00:15:40.208 16:12:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.208 16:12:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:15:40.208 16:12:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.208 16:12:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.208 [2024-12-12 16:12:06.531640] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:40.208 [2024-12-12 16:12:06.533656] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:40.208 [2024-12-12 16:12:06.533730] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:40.208 [2024-12-12 16:12:06.533822] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:40.208 [2024-12-12 16:12:06.533834] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:40.208 [2024-12-12 
16:12:06.534127] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:40.208 [2024-12-12 16:12:06.540084] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:40.208 [2024-12-12 16:12:06.540129] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:40.208 [2024-12-12 16:12:06.540326] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:40.208 16:12:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.208 16:12:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:40.208 16:12:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:40.208 16:12:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:40.208 16:12:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:40.208 16:12:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:40.208 16:12:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:40.208 16:12:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:40.208 16:12:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:40.208 16:12:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:40.208 16:12:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:40.208 16:12:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.208 16:12:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:40.208 16:12:06 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.208 16:12:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.468 16:12:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.468 16:12:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:40.468 "name": "raid_bdev1", 00:15:40.469 "uuid": "8e1803de-c667-4574-a355-4c583151bf07", 00:15:40.469 "strip_size_kb": 64, 00:15:40.469 "state": "online", 00:15:40.469 "raid_level": "raid5f", 00:15:40.469 "superblock": false, 00:15:40.469 "num_base_bdevs": 3, 00:15:40.469 "num_base_bdevs_discovered": 3, 00:15:40.469 "num_base_bdevs_operational": 3, 00:15:40.469 "base_bdevs_list": [ 00:15:40.469 { 00:15:40.469 "name": "BaseBdev1", 00:15:40.469 "uuid": "129c2309-0242-5d18-9b66-2ca7bea54473", 00:15:40.469 "is_configured": true, 00:15:40.469 "data_offset": 0, 00:15:40.469 "data_size": 65536 00:15:40.469 }, 00:15:40.469 { 00:15:40.469 "name": "BaseBdev2", 00:15:40.469 "uuid": "2b1c9a6a-00e6-551b-8208-f3cd7bc9733f", 00:15:40.469 "is_configured": true, 00:15:40.469 "data_offset": 0, 00:15:40.469 "data_size": 65536 00:15:40.469 }, 00:15:40.469 { 00:15:40.469 "name": "BaseBdev3", 00:15:40.469 "uuid": "953045c7-961c-5954-b7d2-1bca86bc78f9", 00:15:40.469 "is_configured": true, 00:15:40.469 "data_offset": 0, 00:15:40.469 "data_size": 65536 00:15:40.469 } 00:15:40.469 ] 00:15:40.469 }' 00:15:40.469 16:12:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:40.469 16:12:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.728 16:12:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:40.728 16:12:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:40.728 16:12:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.728 16:12:06 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.728 [2024-12-12 16:12:06.986851] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:40.728 16:12:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.728 16:12:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:15:40.728 16:12:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.728 16:12:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:40.728 16:12:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.728 16:12:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.728 16:12:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.728 16:12:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:15:40.728 16:12:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:40.728 16:12:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:40.728 16:12:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:40.728 16:12:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:40.728 16:12:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:40.728 16:12:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:40.728 16:12:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:40.728 16:12:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:40.728 16:12:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # 
local nbd_list 00:15:40.728 16:12:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:40.728 16:12:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:40.728 16:12:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:40.728 16:12:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:40.988 [2024-12-12 16:12:07.246238] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:40.988 /dev/nbd0 00:15:40.988 16:12:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:40.988 16:12:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:40.988 16:12:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:40.988 16:12:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:40.988 16:12:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:40.988 16:12:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:40.988 16:12:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:40.988 16:12:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:40.988 16:12:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:40.988 16:12:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:40.988 16:12:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:40.988 1+0 records in 00:15:40.988 1+0 records out 00:15:40.988 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000444832 s, 9.2 MB/s 00:15:40.988 
16:12:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:40.988 16:12:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:40.988 16:12:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:40.988 16:12:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:40.988 16:12:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:40.988 16:12:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:40.988 16:12:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:40.988 16:12:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:15:40.988 16:12:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:15:40.988 16:12:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:15:40.988 16:12:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:15:41.556 512+0 records in 00:15:41.556 512+0 records out 00:15:41.556 67108864 bytes (67 MB, 64 MiB) copied, 0.439253 s, 153 MB/s 00:15:41.556 16:12:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:41.556 16:12:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:41.556 16:12:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:41.556 16:12:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:41.556 16:12:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:41.556 16:12:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 
00:15:41.556 16:12:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:41.816 16:12:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:41.816 [2024-12-12 16:12:07.975284] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:41.816 16:12:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:41.816 16:12:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:41.816 16:12:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:41.816 16:12:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:41.816 16:12:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:41.816 16:12:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:41.816 16:12:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:41.816 16:12:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:41.816 16:12:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.816 16:12:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.816 [2024-12-12 16:12:07.994637] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:41.816 16:12:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.816 16:12:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:41.816 16:12:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:41.816 16:12:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:41.816 16:12:07 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:41.816 16:12:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:41.816 16:12:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:41.816 16:12:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:41.816 16:12:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:41.816 16:12:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:41.816 16:12:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:41.816 16:12:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.816 16:12:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:41.816 16:12:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.816 16:12:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.816 16:12:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.816 16:12:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:41.816 "name": "raid_bdev1", 00:15:41.816 "uuid": "8e1803de-c667-4574-a355-4c583151bf07", 00:15:41.816 "strip_size_kb": 64, 00:15:41.816 "state": "online", 00:15:41.816 "raid_level": "raid5f", 00:15:41.816 "superblock": false, 00:15:41.816 "num_base_bdevs": 3, 00:15:41.816 "num_base_bdevs_discovered": 2, 00:15:41.816 "num_base_bdevs_operational": 2, 00:15:41.816 "base_bdevs_list": [ 00:15:41.816 { 00:15:41.816 "name": null, 00:15:41.816 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:41.816 "is_configured": false, 00:15:41.816 "data_offset": 0, 00:15:41.816 "data_size": 65536 00:15:41.816 }, 00:15:41.816 { 00:15:41.816 
"name": "BaseBdev2", 00:15:41.816 "uuid": "2b1c9a6a-00e6-551b-8208-f3cd7bc9733f", 00:15:41.816 "is_configured": true, 00:15:41.816 "data_offset": 0, 00:15:41.816 "data_size": 65536 00:15:41.816 }, 00:15:41.816 { 00:15:41.816 "name": "BaseBdev3", 00:15:41.816 "uuid": "953045c7-961c-5954-b7d2-1bca86bc78f9", 00:15:41.816 "is_configured": true, 00:15:41.816 "data_offset": 0, 00:15:41.816 "data_size": 65536 00:15:41.816 } 00:15:41.816 ] 00:15:41.816 }' 00:15:41.816 16:12:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:41.816 16:12:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.076 16:12:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:42.076 16:12:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.076 16:12:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.076 [2024-12-12 16:12:08.346070] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:42.076 [2024-12-12 16:12:08.364635] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:15:42.076 16:12:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.076 16:12:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:42.076 [2024-12-12 16:12:08.372903] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:43.458 16:12:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:43.458 16:12:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:43.458 16:12:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:43.458 16:12:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:15:43.458 16:12:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:43.458 16:12:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.458 16:12:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:43.458 16:12:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.458 16:12:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.458 16:12:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.458 16:12:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:43.458 "name": "raid_bdev1", 00:15:43.458 "uuid": "8e1803de-c667-4574-a355-4c583151bf07", 00:15:43.458 "strip_size_kb": 64, 00:15:43.458 "state": "online", 00:15:43.458 "raid_level": "raid5f", 00:15:43.458 "superblock": false, 00:15:43.458 "num_base_bdevs": 3, 00:15:43.458 "num_base_bdevs_discovered": 3, 00:15:43.458 "num_base_bdevs_operational": 3, 00:15:43.458 "process": { 00:15:43.458 "type": "rebuild", 00:15:43.458 "target": "spare", 00:15:43.458 "progress": { 00:15:43.458 "blocks": 20480, 00:15:43.458 "percent": 15 00:15:43.459 } 00:15:43.459 }, 00:15:43.459 "base_bdevs_list": [ 00:15:43.459 { 00:15:43.459 "name": "spare", 00:15:43.459 "uuid": "b3ea5dae-d7af-5410-99c6-5d308354d97d", 00:15:43.459 "is_configured": true, 00:15:43.459 "data_offset": 0, 00:15:43.459 "data_size": 65536 00:15:43.459 }, 00:15:43.459 { 00:15:43.459 "name": "BaseBdev2", 00:15:43.459 "uuid": "2b1c9a6a-00e6-551b-8208-f3cd7bc9733f", 00:15:43.459 "is_configured": true, 00:15:43.459 "data_offset": 0, 00:15:43.459 "data_size": 65536 00:15:43.459 }, 00:15:43.459 { 00:15:43.459 "name": "BaseBdev3", 00:15:43.459 "uuid": "953045c7-961c-5954-b7d2-1bca86bc78f9", 00:15:43.459 "is_configured": true, 00:15:43.459 "data_offset": 0, 00:15:43.459 
"data_size": 65536 00:15:43.459 } 00:15:43.459 ] 00:15:43.459 }' 00:15:43.459 16:12:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:43.459 16:12:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:43.459 16:12:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:43.459 16:12:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:43.459 16:12:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:43.459 16:12:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.459 16:12:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.459 [2024-12-12 16:12:09.532445] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:43.459 [2024-12-12 16:12:09.584613] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:43.459 [2024-12-12 16:12:09.584686] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:43.459 [2024-12-12 16:12:09.584710] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:43.459 [2024-12-12 16:12:09.584721] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:43.459 16:12:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.459 16:12:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:43.459 16:12:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:43.459 16:12:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:43.459 16:12:09 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:43.459 16:12:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:43.459 16:12:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:43.459 16:12:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:43.459 16:12:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:43.459 16:12:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:43.459 16:12:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:43.459 16:12:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.459 16:12:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:43.459 16:12:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.459 16:12:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.459 16:12:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.459 16:12:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:43.459 "name": "raid_bdev1", 00:15:43.459 "uuid": "8e1803de-c667-4574-a355-4c583151bf07", 00:15:43.459 "strip_size_kb": 64, 00:15:43.459 "state": "online", 00:15:43.459 "raid_level": "raid5f", 00:15:43.459 "superblock": false, 00:15:43.459 "num_base_bdevs": 3, 00:15:43.459 "num_base_bdevs_discovered": 2, 00:15:43.459 "num_base_bdevs_operational": 2, 00:15:43.459 "base_bdevs_list": [ 00:15:43.459 { 00:15:43.459 "name": null, 00:15:43.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.459 "is_configured": false, 00:15:43.459 "data_offset": 0, 00:15:43.459 "data_size": 65536 00:15:43.459 }, 00:15:43.459 { 00:15:43.459 "name": "BaseBdev2", 00:15:43.459 
"uuid": "2b1c9a6a-00e6-551b-8208-f3cd7bc9733f", 00:15:43.459 "is_configured": true, 00:15:43.459 "data_offset": 0, 00:15:43.459 "data_size": 65536 00:15:43.459 }, 00:15:43.459 { 00:15:43.459 "name": "BaseBdev3", 00:15:43.459 "uuid": "953045c7-961c-5954-b7d2-1bca86bc78f9", 00:15:43.459 "is_configured": true, 00:15:43.459 "data_offset": 0, 00:15:43.459 "data_size": 65536 00:15:43.459 } 00:15:43.459 ] 00:15:43.459 }' 00:15:43.459 16:12:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:43.459 16:12:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.028 16:12:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:44.028 16:12:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:44.028 16:12:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:44.028 16:12:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:44.028 16:12:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:44.028 16:12:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.028 16:12:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.028 16:12:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:44.028 16:12:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.028 16:12:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.028 16:12:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:44.028 "name": "raid_bdev1", 00:15:44.028 "uuid": "8e1803de-c667-4574-a355-4c583151bf07", 00:15:44.028 "strip_size_kb": 64, 00:15:44.028 "state": "online", 00:15:44.028 "raid_level": 
"raid5f", 00:15:44.028 "superblock": false, 00:15:44.028 "num_base_bdevs": 3, 00:15:44.028 "num_base_bdevs_discovered": 2, 00:15:44.028 "num_base_bdevs_operational": 2, 00:15:44.028 "base_bdevs_list": [ 00:15:44.028 { 00:15:44.028 "name": null, 00:15:44.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:44.028 "is_configured": false, 00:15:44.028 "data_offset": 0, 00:15:44.028 "data_size": 65536 00:15:44.028 }, 00:15:44.028 { 00:15:44.028 "name": "BaseBdev2", 00:15:44.028 "uuid": "2b1c9a6a-00e6-551b-8208-f3cd7bc9733f", 00:15:44.028 "is_configured": true, 00:15:44.028 "data_offset": 0, 00:15:44.028 "data_size": 65536 00:15:44.028 }, 00:15:44.028 { 00:15:44.028 "name": "BaseBdev3", 00:15:44.028 "uuid": "953045c7-961c-5954-b7d2-1bca86bc78f9", 00:15:44.028 "is_configured": true, 00:15:44.028 "data_offset": 0, 00:15:44.028 "data_size": 65536 00:15:44.028 } 00:15:44.028 ] 00:15:44.028 }' 00:15:44.028 16:12:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:44.028 16:12:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:44.028 16:12:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:44.028 16:12:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:44.028 16:12:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:44.028 16:12:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.028 16:12:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.028 [2024-12-12 16:12:10.242084] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:44.028 [2024-12-12 16:12:10.257369] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:15:44.028 16:12:10 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.028 16:12:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:44.028 [2024-12-12 16:12:10.264486] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:44.968 16:12:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:44.968 16:12:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:44.968 16:12:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:44.968 16:12:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:44.968 16:12:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:44.968 16:12:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.968 16:12:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:44.968 16:12:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.968 16:12:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.968 16:12:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.968 16:12:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:44.968 "name": "raid_bdev1", 00:15:44.968 "uuid": "8e1803de-c667-4574-a355-4c583151bf07", 00:15:44.968 "strip_size_kb": 64, 00:15:44.968 "state": "online", 00:15:44.968 "raid_level": "raid5f", 00:15:44.968 "superblock": false, 00:15:44.968 "num_base_bdevs": 3, 00:15:44.968 "num_base_bdevs_discovered": 3, 00:15:44.968 "num_base_bdevs_operational": 3, 00:15:44.968 "process": { 00:15:44.968 "type": "rebuild", 00:15:44.968 "target": "spare", 00:15:44.968 "progress": { 00:15:44.968 "blocks": 20480, 00:15:44.968 
"percent": 15 00:15:44.968 } 00:15:44.968 }, 00:15:44.968 "base_bdevs_list": [ 00:15:44.968 { 00:15:44.968 "name": "spare", 00:15:44.968 "uuid": "b3ea5dae-d7af-5410-99c6-5d308354d97d", 00:15:44.968 "is_configured": true, 00:15:44.968 "data_offset": 0, 00:15:44.968 "data_size": 65536 00:15:44.968 }, 00:15:44.968 { 00:15:44.968 "name": "BaseBdev2", 00:15:44.968 "uuid": "2b1c9a6a-00e6-551b-8208-f3cd7bc9733f", 00:15:44.968 "is_configured": true, 00:15:44.968 "data_offset": 0, 00:15:44.968 "data_size": 65536 00:15:44.968 }, 00:15:44.968 { 00:15:44.968 "name": "BaseBdev3", 00:15:44.968 "uuid": "953045c7-961c-5954-b7d2-1bca86bc78f9", 00:15:44.968 "is_configured": true, 00:15:44.968 "data_offset": 0, 00:15:44.968 "data_size": 65536 00:15:44.968 } 00:15:44.968 ] 00:15:44.968 }' 00:15:44.968 16:12:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:45.228 16:12:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:45.228 16:12:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:45.228 16:12:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:45.228 16:12:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:15:45.228 16:12:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:15:45.228 16:12:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:15:45.228 16:12:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=559 00:15:45.228 16:12:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:45.228 16:12:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:45.228 16:12:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:15:45.228 16:12:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:45.228 16:12:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:45.228 16:12:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:45.228 16:12:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:45.228 16:12:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.228 16:12:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.228 16:12:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.228 16:12:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.228 16:12:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:45.228 "name": "raid_bdev1", 00:15:45.228 "uuid": "8e1803de-c667-4574-a355-4c583151bf07", 00:15:45.228 "strip_size_kb": 64, 00:15:45.228 "state": "online", 00:15:45.228 "raid_level": "raid5f", 00:15:45.228 "superblock": false, 00:15:45.228 "num_base_bdevs": 3, 00:15:45.228 "num_base_bdevs_discovered": 3, 00:15:45.228 "num_base_bdevs_operational": 3, 00:15:45.228 "process": { 00:15:45.228 "type": "rebuild", 00:15:45.228 "target": "spare", 00:15:45.228 "progress": { 00:15:45.228 "blocks": 22528, 00:15:45.228 "percent": 17 00:15:45.228 } 00:15:45.228 }, 00:15:45.228 "base_bdevs_list": [ 00:15:45.228 { 00:15:45.228 "name": "spare", 00:15:45.228 "uuid": "b3ea5dae-d7af-5410-99c6-5d308354d97d", 00:15:45.228 "is_configured": true, 00:15:45.228 "data_offset": 0, 00:15:45.228 "data_size": 65536 00:15:45.228 }, 00:15:45.228 { 00:15:45.228 "name": "BaseBdev2", 00:15:45.228 "uuid": "2b1c9a6a-00e6-551b-8208-f3cd7bc9733f", 00:15:45.228 "is_configured": true, 00:15:45.228 "data_offset": 0, 00:15:45.228 
"data_size": 65536 00:15:45.228 }, 00:15:45.228 { 00:15:45.228 "name": "BaseBdev3", 00:15:45.228 "uuid": "953045c7-961c-5954-b7d2-1bca86bc78f9", 00:15:45.228 "is_configured": true, 00:15:45.228 "data_offset": 0, 00:15:45.228 "data_size": 65536 00:15:45.228 } 00:15:45.228 ] 00:15:45.228 }' 00:15:45.228 16:12:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:45.228 16:12:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:45.228 16:12:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:45.228 16:12:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:45.228 16:12:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:46.648 16:12:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:46.648 16:12:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:46.648 16:12:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:46.648 16:12:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:46.648 16:12:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:46.648 16:12:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:46.648 16:12:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.648 16:12:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:46.648 16:12:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.648 16:12:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.648 16:12:12 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.648 16:12:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:46.648 "name": "raid_bdev1", 00:15:46.648 "uuid": "8e1803de-c667-4574-a355-4c583151bf07", 00:15:46.648 "strip_size_kb": 64, 00:15:46.648 "state": "online", 00:15:46.648 "raid_level": "raid5f", 00:15:46.648 "superblock": false, 00:15:46.648 "num_base_bdevs": 3, 00:15:46.648 "num_base_bdevs_discovered": 3, 00:15:46.648 "num_base_bdevs_operational": 3, 00:15:46.648 "process": { 00:15:46.648 "type": "rebuild", 00:15:46.648 "target": "spare", 00:15:46.648 "progress": { 00:15:46.648 "blocks": 45056, 00:15:46.648 "percent": 34 00:15:46.648 } 00:15:46.648 }, 00:15:46.648 "base_bdevs_list": [ 00:15:46.648 { 00:15:46.648 "name": "spare", 00:15:46.648 "uuid": "b3ea5dae-d7af-5410-99c6-5d308354d97d", 00:15:46.648 "is_configured": true, 00:15:46.648 "data_offset": 0, 00:15:46.648 "data_size": 65536 00:15:46.648 }, 00:15:46.648 { 00:15:46.648 "name": "BaseBdev2", 00:15:46.648 "uuid": "2b1c9a6a-00e6-551b-8208-f3cd7bc9733f", 00:15:46.648 "is_configured": true, 00:15:46.648 "data_offset": 0, 00:15:46.648 "data_size": 65536 00:15:46.648 }, 00:15:46.648 { 00:15:46.648 "name": "BaseBdev3", 00:15:46.648 "uuid": "953045c7-961c-5954-b7d2-1bca86bc78f9", 00:15:46.648 "is_configured": true, 00:15:46.648 "data_offset": 0, 00:15:46.648 "data_size": 65536 00:15:46.648 } 00:15:46.648 ] 00:15:46.648 }' 00:15:46.648 16:12:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:46.648 16:12:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:46.648 16:12:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:46.648 16:12:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:46.648 16:12:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 
00:15:47.586 16:12:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:47.586 16:12:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:47.586 16:12:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:47.586 16:12:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:47.586 16:12:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:47.586 16:12:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:47.586 16:12:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.586 16:12:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:47.586 16:12:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.586 16:12:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.586 16:12:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.586 16:12:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:47.586 "name": "raid_bdev1", 00:15:47.586 "uuid": "8e1803de-c667-4574-a355-4c583151bf07", 00:15:47.586 "strip_size_kb": 64, 00:15:47.586 "state": "online", 00:15:47.586 "raid_level": "raid5f", 00:15:47.586 "superblock": false, 00:15:47.586 "num_base_bdevs": 3, 00:15:47.586 "num_base_bdevs_discovered": 3, 00:15:47.586 "num_base_bdevs_operational": 3, 00:15:47.586 "process": { 00:15:47.586 "type": "rebuild", 00:15:47.586 "target": "spare", 00:15:47.586 "progress": { 00:15:47.586 "blocks": 69632, 00:15:47.586 "percent": 53 00:15:47.586 } 00:15:47.586 }, 00:15:47.586 "base_bdevs_list": [ 00:15:47.586 { 00:15:47.586 "name": "spare", 00:15:47.586 "uuid": 
"b3ea5dae-d7af-5410-99c6-5d308354d97d", 00:15:47.586 "is_configured": true, 00:15:47.586 "data_offset": 0, 00:15:47.586 "data_size": 65536 00:15:47.586 }, 00:15:47.586 { 00:15:47.586 "name": "BaseBdev2", 00:15:47.586 "uuid": "2b1c9a6a-00e6-551b-8208-f3cd7bc9733f", 00:15:47.586 "is_configured": true, 00:15:47.586 "data_offset": 0, 00:15:47.587 "data_size": 65536 00:15:47.587 }, 00:15:47.587 { 00:15:47.587 "name": "BaseBdev3", 00:15:47.587 "uuid": "953045c7-961c-5954-b7d2-1bca86bc78f9", 00:15:47.587 "is_configured": true, 00:15:47.587 "data_offset": 0, 00:15:47.587 "data_size": 65536 00:15:47.587 } 00:15:47.587 ] 00:15:47.587 }' 00:15:47.587 16:12:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:47.587 16:12:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:47.587 16:12:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:47.587 16:12:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:47.587 16:12:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:48.524 16:12:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:48.524 16:12:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:48.524 16:12:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:48.524 16:12:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:48.524 16:12:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:48.524 16:12:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:48.524 16:12:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.524 16:12:14 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:48.524 16:12:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.524 16:12:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.784 16:12:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.784 16:12:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:48.784 "name": "raid_bdev1", 00:15:48.784 "uuid": "8e1803de-c667-4574-a355-4c583151bf07", 00:15:48.784 "strip_size_kb": 64, 00:15:48.784 "state": "online", 00:15:48.784 "raid_level": "raid5f", 00:15:48.784 "superblock": false, 00:15:48.784 "num_base_bdevs": 3, 00:15:48.784 "num_base_bdevs_discovered": 3, 00:15:48.785 "num_base_bdevs_operational": 3, 00:15:48.785 "process": { 00:15:48.785 "type": "rebuild", 00:15:48.785 "target": "spare", 00:15:48.785 "progress": { 00:15:48.785 "blocks": 92160, 00:15:48.785 "percent": 70 00:15:48.785 } 00:15:48.785 }, 00:15:48.785 "base_bdevs_list": [ 00:15:48.785 { 00:15:48.785 "name": "spare", 00:15:48.785 "uuid": "b3ea5dae-d7af-5410-99c6-5d308354d97d", 00:15:48.785 "is_configured": true, 00:15:48.785 "data_offset": 0, 00:15:48.785 "data_size": 65536 00:15:48.785 }, 00:15:48.785 { 00:15:48.785 "name": "BaseBdev2", 00:15:48.785 "uuid": "2b1c9a6a-00e6-551b-8208-f3cd7bc9733f", 00:15:48.785 "is_configured": true, 00:15:48.785 "data_offset": 0, 00:15:48.785 "data_size": 65536 00:15:48.785 }, 00:15:48.785 { 00:15:48.785 "name": "BaseBdev3", 00:15:48.785 "uuid": "953045c7-961c-5954-b7d2-1bca86bc78f9", 00:15:48.785 "is_configured": true, 00:15:48.785 "data_offset": 0, 00:15:48.785 "data_size": 65536 00:15:48.785 } 00:15:48.785 ] 00:15:48.785 }' 00:15:48.785 16:12:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:48.785 16:12:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:48.785 16:12:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:48.785 16:12:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:48.785 16:12:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:49.724 16:12:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:49.724 16:12:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:49.724 16:12:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:49.724 16:12:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:49.724 16:12:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:49.724 16:12:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:49.724 16:12:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.724 16:12:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.724 16:12:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:49.724 16:12:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.724 16:12:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.724 16:12:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:49.724 "name": "raid_bdev1", 00:15:49.724 "uuid": "8e1803de-c667-4574-a355-4c583151bf07", 00:15:49.724 "strip_size_kb": 64, 00:15:49.724 "state": "online", 00:15:49.724 "raid_level": "raid5f", 00:15:49.724 "superblock": false, 00:15:49.724 "num_base_bdevs": 3, 00:15:49.724 "num_base_bdevs_discovered": 3, 00:15:49.724 
"num_base_bdevs_operational": 3, 00:15:49.724 "process": { 00:15:49.724 "type": "rebuild", 00:15:49.724 "target": "spare", 00:15:49.724 "progress": { 00:15:49.724 "blocks": 116736, 00:15:49.724 "percent": 89 00:15:49.724 } 00:15:49.724 }, 00:15:49.724 "base_bdevs_list": [ 00:15:49.724 { 00:15:49.724 "name": "spare", 00:15:49.724 "uuid": "b3ea5dae-d7af-5410-99c6-5d308354d97d", 00:15:49.724 "is_configured": true, 00:15:49.724 "data_offset": 0, 00:15:49.724 "data_size": 65536 00:15:49.724 }, 00:15:49.724 { 00:15:49.724 "name": "BaseBdev2", 00:15:49.724 "uuid": "2b1c9a6a-00e6-551b-8208-f3cd7bc9733f", 00:15:49.724 "is_configured": true, 00:15:49.724 "data_offset": 0, 00:15:49.724 "data_size": 65536 00:15:49.724 }, 00:15:49.724 { 00:15:49.724 "name": "BaseBdev3", 00:15:49.724 "uuid": "953045c7-961c-5954-b7d2-1bca86bc78f9", 00:15:49.724 "is_configured": true, 00:15:49.724 "data_offset": 0, 00:15:49.724 "data_size": 65536 00:15:49.724 } 00:15:49.724 ] 00:15:49.724 }' 00:15:49.724 16:12:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:49.984 16:12:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:49.984 16:12:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:49.984 16:12:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:49.984 16:12:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:50.554 [2024-12-12 16:12:16.725686] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:50.554 [2024-12-12 16:12:16.725824] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:50.554 [2024-12-12 16:12:16.725889] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:51.124 16:12:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < 
timeout )) 00:15:51.124 16:12:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:51.124 16:12:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:51.124 16:12:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:51.124 16:12:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:51.124 16:12:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:51.124 16:12:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.124 16:12:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:51.124 16:12:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.124 16:12:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.124 16:12:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.124 16:12:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:51.124 "name": "raid_bdev1", 00:15:51.124 "uuid": "8e1803de-c667-4574-a355-4c583151bf07", 00:15:51.124 "strip_size_kb": 64, 00:15:51.124 "state": "online", 00:15:51.124 "raid_level": "raid5f", 00:15:51.124 "superblock": false, 00:15:51.124 "num_base_bdevs": 3, 00:15:51.124 "num_base_bdevs_discovered": 3, 00:15:51.124 "num_base_bdevs_operational": 3, 00:15:51.124 "base_bdevs_list": [ 00:15:51.124 { 00:15:51.124 "name": "spare", 00:15:51.124 "uuid": "b3ea5dae-d7af-5410-99c6-5d308354d97d", 00:15:51.124 "is_configured": true, 00:15:51.124 "data_offset": 0, 00:15:51.124 "data_size": 65536 00:15:51.124 }, 00:15:51.124 { 00:15:51.124 "name": "BaseBdev2", 00:15:51.124 "uuid": "2b1c9a6a-00e6-551b-8208-f3cd7bc9733f", 00:15:51.124 "is_configured": true, 00:15:51.124 
"data_offset": 0, 00:15:51.124 "data_size": 65536 00:15:51.124 }, 00:15:51.124 { 00:15:51.124 "name": "BaseBdev3", 00:15:51.124 "uuid": "953045c7-961c-5954-b7d2-1bca86bc78f9", 00:15:51.124 "is_configured": true, 00:15:51.124 "data_offset": 0, 00:15:51.124 "data_size": 65536 00:15:51.124 } 00:15:51.124 ] 00:15:51.124 }' 00:15:51.124 16:12:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:51.124 16:12:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:51.124 16:12:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:51.124 16:12:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:51.124 16:12:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:15:51.124 16:12:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:51.124 16:12:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:51.124 16:12:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:51.124 16:12:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:51.124 16:12:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:51.124 16:12:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.124 16:12:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:51.124 16:12:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.124 16:12:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.124 16:12:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.124 16:12:17 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:51.124 "name": "raid_bdev1", 00:15:51.124 "uuid": "8e1803de-c667-4574-a355-4c583151bf07", 00:15:51.124 "strip_size_kb": 64, 00:15:51.124 "state": "online", 00:15:51.124 "raid_level": "raid5f", 00:15:51.124 "superblock": false, 00:15:51.124 "num_base_bdevs": 3, 00:15:51.124 "num_base_bdevs_discovered": 3, 00:15:51.124 "num_base_bdevs_operational": 3, 00:15:51.124 "base_bdevs_list": [ 00:15:51.124 { 00:15:51.124 "name": "spare", 00:15:51.124 "uuid": "b3ea5dae-d7af-5410-99c6-5d308354d97d", 00:15:51.124 "is_configured": true, 00:15:51.124 "data_offset": 0, 00:15:51.124 "data_size": 65536 00:15:51.124 }, 00:15:51.124 { 00:15:51.124 "name": "BaseBdev2", 00:15:51.124 "uuid": "2b1c9a6a-00e6-551b-8208-f3cd7bc9733f", 00:15:51.124 "is_configured": true, 00:15:51.124 "data_offset": 0, 00:15:51.124 "data_size": 65536 00:15:51.124 }, 00:15:51.124 { 00:15:51.124 "name": "BaseBdev3", 00:15:51.124 "uuid": "953045c7-961c-5954-b7d2-1bca86bc78f9", 00:15:51.124 "is_configured": true, 00:15:51.124 "data_offset": 0, 00:15:51.124 "data_size": 65536 00:15:51.124 } 00:15:51.124 ] 00:15:51.124 }' 00:15:51.124 16:12:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:51.124 16:12:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:51.124 16:12:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:51.125 16:12:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:51.125 16:12:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:51.125 16:12:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:51.125 16:12:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:51.125 16:12:17 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:51.125 16:12:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:51.125 16:12:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:51.125 16:12:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:51.125 16:12:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:51.125 16:12:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:51.125 16:12:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:51.384 16:12:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.384 16:12:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.384 16:12:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.384 16:12:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:51.384 16:12:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.384 16:12:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:51.384 "name": "raid_bdev1", 00:15:51.384 "uuid": "8e1803de-c667-4574-a355-4c583151bf07", 00:15:51.384 "strip_size_kb": 64, 00:15:51.384 "state": "online", 00:15:51.384 "raid_level": "raid5f", 00:15:51.384 "superblock": false, 00:15:51.384 "num_base_bdevs": 3, 00:15:51.384 "num_base_bdevs_discovered": 3, 00:15:51.384 "num_base_bdevs_operational": 3, 00:15:51.384 "base_bdevs_list": [ 00:15:51.384 { 00:15:51.384 "name": "spare", 00:15:51.384 "uuid": "b3ea5dae-d7af-5410-99c6-5d308354d97d", 00:15:51.384 "is_configured": true, 00:15:51.384 "data_offset": 0, 00:15:51.384 "data_size": 65536 00:15:51.384 }, 00:15:51.384 { 00:15:51.384 
"name": "BaseBdev2", 00:15:51.384 "uuid": "2b1c9a6a-00e6-551b-8208-f3cd7bc9733f", 00:15:51.384 "is_configured": true, 00:15:51.384 "data_offset": 0, 00:15:51.384 "data_size": 65536 00:15:51.384 }, 00:15:51.384 { 00:15:51.384 "name": "BaseBdev3", 00:15:51.384 "uuid": "953045c7-961c-5954-b7d2-1bca86bc78f9", 00:15:51.384 "is_configured": true, 00:15:51.384 "data_offset": 0, 00:15:51.384 "data_size": 65536 00:15:51.384 } 00:15:51.384 ] 00:15:51.384 }' 00:15:51.384 16:12:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:51.384 16:12:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.644 16:12:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:51.644 16:12:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.644 16:12:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.644 [2024-12-12 16:12:17.918132] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:51.644 [2024-12-12 16:12:17.918283] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:51.644 [2024-12-12 16:12:17.918426] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:51.644 [2024-12-12 16:12:17.918553] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:51.644 [2024-12-12 16:12:17.918616] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:51.644 16:12:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.644 16:12:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.644 16:12:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:15:51.644 16:12:17 bdev_raid.raid5f_rebuild_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.644 16:12:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.644 16:12:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.644 16:12:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:51.644 16:12:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:51.644 16:12:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:51.644 16:12:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:51.644 16:12:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:51.644 16:12:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:51.644 16:12:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:51.644 16:12:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:51.644 16:12:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:51.644 16:12:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:51.644 16:12:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:51.644 16:12:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:51.644 16:12:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:51.904 /dev/nbd0 00:15:51.904 16:12:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:51.904 16:12:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:51.904 16:12:18 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:51.904 16:12:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:51.904 16:12:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:51.904 16:12:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:51.904 16:12:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:51.904 16:12:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:51.904 16:12:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:51.904 16:12:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:51.904 16:12:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:51.904 1+0 records in 00:15:51.904 1+0 records out 00:15:51.904 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000401479 s, 10.2 MB/s 00:15:51.904 16:12:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:51.904 16:12:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:51.904 16:12:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:51.904 16:12:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:51.904 16:12:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:51.904 16:12:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:51.904 16:12:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:51.904 16:12:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:52.164 /dev/nbd1 00:15:52.164 16:12:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:52.164 16:12:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:52.164 16:12:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:52.164 16:12:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:52.164 16:12:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:52.164 16:12:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:52.164 16:12:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:52.164 16:12:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:52.164 16:12:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:52.164 16:12:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:52.164 16:12:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:52.164 1+0 records in 00:15:52.164 1+0 records out 00:15:52.164 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000248973 s, 16.5 MB/s 00:15:52.164 16:12:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:52.164 16:12:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:52.164 16:12:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:52.164 16:12:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:52.164 16:12:18 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:52.164 16:12:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:52.164 16:12:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:52.164 16:12:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:15:52.424 16:12:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:52.424 16:12:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:52.424 16:12:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:52.424 16:12:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:52.424 16:12:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:52.424 16:12:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:52.424 16:12:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:52.683 16:12:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:52.683 16:12:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:52.683 16:12:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:52.683 16:12:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:52.683 16:12:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:52.683 16:12:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:52.683 16:12:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:52.683 16:12:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # 
return 0 00:15:52.683 16:12:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:52.683 16:12:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:52.943 16:12:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:52.943 16:12:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:52.943 16:12:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:52.943 16:12:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:52.943 16:12:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:52.943 16:12:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:52.943 16:12:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:52.943 16:12:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:52.943 16:12:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:15:52.943 16:12:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 83660 00:15:52.943 16:12:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 83660 ']' 00:15:52.943 16:12:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 83660 00:15:52.943 16:12:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:15:52.943 16:12:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:52.943 16:12:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83660 00:15:52.943 16:12:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:52.943 killing process with pid 83660 00:15:52.943 
Received shutdown signal, test time was about 60.000000 seconds 00:15:52.943 00:15:52.943 Latency(us) 00:15:52.943 [2024-12-12T16:12:19.295Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:52.943 [2024-12-12T16:12:19.295Z] =================================================================================================================== 00:15:52.943 [2024-12-12T16:12:19.295Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:52.943 16:12:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:52.943 16:12:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83660' 00:15:52.943 16:12:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 83660 00:15:52.943 [2024-12-12 16:12:19.149755] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:52.943 16:12:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 83660 00:15:53.512 [2024-12-12 16:12:19.576427] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:54.451 16:12:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:15:54.451 00:15:54.451 real 0m15.450s 00:15:54.451 user 0m18.677s 00:15:54.451 sys 0m2.223s 00:15:54.451 ************************************ 00:15:54.451 END TEST raid5f_rebuild_test 00:15:54.451 ************************************ 00:15:54.451 16:12:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:54.451 16:12:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.711 16:12:20 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:15:54.711 16:12:20 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:54.711 16:12:20 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:54.711 16:12:20 bdev_raid 
-- common/autotest_common.sh@10 -- # set +x 00:15:54.711 ************************************ 00:15:54.711 START TEST raid5f_rebuild_test_sb 00:15:54.711 ************************************ 00:15:54.711 16:12:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 true false true 00:15:54.711 16:12:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:15:54.711 16:12:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:15:54.711 16:12:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:54.711 16:12:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:54.711 16:12:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:54.711 16:12:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:54.711 16:12:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:54.711 16:12:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:54.711 16:12:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:54.711 16:12:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:54.711 16:12:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:54.711 16:12:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:54.711 16:12:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:54.711 16:12:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:54.711 16:12:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:54.711 16:12:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 
00:15:54.711 16:12:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:54.711 16:12:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:54.711 16:12:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:54.711 16:12:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:54.711 16:12:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:54.711 16:12:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:54.711 16:12:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:54.711 16:12:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:15:54.711 16:12:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:15:54.711 16:12:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:15:54.711 16:12:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:15:54.711 16:12:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:54.711 16:12:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:54.711 16:12:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=84103 00:15:54.711 16:12:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 84103 00:15:54.711 16:12:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:54.711 16:12:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 84103 ']' 00:15:54.711 16:12:20 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:54.711 16:12:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:54.711 16:12:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:54.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:54.711 16:12:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:54.711 16:12:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.711 [2024-12-12 16:12:20.966425] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:15:54.711 [2024-12-12 16:12:20.966665] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:15:54.711 Zero copy mechanism will not be used. 
00:15:54.711 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84103 ] 00:15:54.970 [2024-12-12 16:12:21.145210] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:54.970 [2024-12-12 16:12:21.279100] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:55.230 [2024-12-12 16:12:21.508050] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:55.230 [2024-12-12 16:12:21.508203] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:55.489 16:12:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:55.489 16:12:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:15:55.489 16:12:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:55.489 16:12:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:55.489 16:12:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.489 16:12:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.749 BaseBdev1_malloc 00:15:55.749 16:12:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.749 16:12:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:55.749 16:12:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.749 16:12:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.749 [2024-12-12 16:12:21.850199] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:55.749 [2024-12-12 16:12:21.850372] vbdev_passthru.c: 636:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:15:55.749 [2024-12-12 16:12:21.850404] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:55.749 [2024-12-12 16:12:21.850419] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:55.749 [2024-12-12 16:12:21.852854] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:55.749 [2024-12-12 16:12:21.852914] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:55.749 BaseBdev1 00:15:55.749 16:12:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.749 16:12:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:55.749 16:12:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:55.749 16:12:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.749 16:12:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.749 BaseBdev2_malloc 00:15:55.749 16:12:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.749 16:12:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:55.749 16:12:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.749 16:12:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.749 [2024-12-12 16:12:21.911112] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:55.749 [2024-12-12 16:12:21.911255] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:55.749 [2024-12-12 16:12:21.911281] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:55.749 
[2024-12-12 16:12:21.911298] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:55.749 [2024-12-12 16:12:21.913666] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:55.749 [2024-12-12 16:12:21.913713] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:55.749 BaseBdev2 00:15:55.749 16:12:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.749 16:12:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:55.749 16:12:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:55.749 16:12:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.749 16:12:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.749 BaseBdev3_malloc 00:15:55.749 16:12:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.749 16:12:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:55.749 16:12:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.749 16:12:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.749 [2024-12-12 16:12:21.984545] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:55.749 [2024-12-12 16:12:21.984687] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:55.749 [2024-12-12 16:12:21.984717] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:55.749 [2024-12-12 16:12:21.984731] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:55.749 [2024-12-12 16:12:21.987117] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:55.749 [2024-12-12 16:12:21.987164] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:55.749 BaseBdev3 00:15:55.749 16:12:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.749 16:12:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:55.749 16:12:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.749 16:12:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.749 spare_malloc 00:15:55.749 16:12:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.749 16:12:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:55.749 16:12:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.749 16:12:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.749 spare_delay 00:15:55.749 16:12:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.749 16:12:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:55.749 16:12:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.749 16:12:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.749 [2024-12-12 16:12:22.058005] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:55.749 [2024-12-12 16:12:22.058066] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:55.750 [2024-12-12 16:12:22.058090] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 00:15:55.750 [2024-12-12 16:12:22.058104] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:55.750 [2024-12-12 16:12:22.060448] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:55.750 [2024-12-12 16:12:22.060495] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:55.750 spare 00:15:55.750 16:12:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.750 16:12:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:15:55.750 16:12:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.750 16:12:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.750 [2024-12-12 16:12:22.070077] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:55.750 [2024-12-12 16:12:22.072124] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:55.750 [2024-12-12 16:12:22.072197] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:55.750 [2024-12-12 16:12:22.072384] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:55.750 [2024-12-12 16:12:22.072397] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:55.750 [2024-12-12 16:12:22.072642] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:55.750 [2024-12-12 16:12:22.077916] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:55.750 [2024-12-12 16:12:22.077988] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:55.750 [2024-12-12 16:12:22.078231] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:55.750 16:12:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.750 16:12:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:55.750 16:12:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:55.750 16:12:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:55.750 16:12:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:55.750 16:12:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:55.750 16:12:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:55.750 16:12:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:55.750 16:12:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:55.750 16:12:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:55.750 16:12:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:55.750 16:12:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.750 16:12:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:55.750 16:12:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.750 16:12:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.010 16:12:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.010 16:12:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:56.010 "name": "raid_bdev1", 00:15:56.010 
"uuid": "92d9ee36-a52f-4a74-a2df-121c6954c230", 00:15:56.010 "strip_size_kb": 64, 00:15:56.010 "state": "online", 00:15:56.010 "raid_level": "raid5f", 00:15:56.010 "superblock": true, 00:15:56.010 "num_base_bdevs": 3, 00:15:56.010 "num_base_bdevs_discovered": 3, 00:15:56.010 "num_base_bdevs_operational": 3, 00:15:56.010 "base_bdevs_list": [ 00:15:56.010 { 00:15:56.010 "name": "BaseBdev1", 00:15:56.010 "uuid": "9093f8c3-1f8e-5b64-82ae-49bd05721701", 00:15:56.010 "is_configured": true, 00:15:56.010 "data_offset": 2048, 00:15:56.010 "data_size": 63488 00:15:56.010 }, 00:15:56.010 { 00:15:56.010 "name": "BaseBdev2", 00:15:56.010 "uuid": "49476007-0b9f-57b4-894d-8859a2e46162", 00:15:56.010 "is_configured": true, 00:15:56.010 "data_offset": 2048, 00:15:56.010 "data_size": 63488 00:15:56.010 }, 00:15:56.010 { 00:15:56.010 "name": "BaseBdev3", 00:15:56.010 "uuid": "54e1c561-e5bb-525f-ae49-892be85f005f", 00:15:56.010 "is_configured": true, 00:15:56.010 "data_offset": 2048, 00:15:56.010 "data_size": 63488 00:15:56.010 } 00:15:56.010 ] 00:15:56.010 }' 00:15:56.010 16:12:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:56.010 16:12:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.270 16:12:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:56.270 16:12:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:56.270 16:12:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.270 16:12:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.270 [2024-12-12 16:12:22.532415] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:56.270 16:12:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.270 16:12:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 
-- # raid_bdev_size=126976 00:15:56.270 16:12:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.270 16:12:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.270 16:12:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.270 16:12:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:56.270 16:12:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.270 16:12:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:15:56.270 16:12:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:56.270 16:12:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:56.270 16:12:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:56.529 16:12:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:56.529 16:12:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:56.530 16:12:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:56.530 16:12:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:56.530 16:12:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:56.530 16:12:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:56.530 16:12:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:56.530 16:12:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:56.530 16:12:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:56.530 16:12:22 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:56.530 [2024-12-12 16:12:22.807860] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:56.530 /dev/nbd0 00:15:56.530 16:12:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:56.530 16:12:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:56.530 16:12:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:56.530 16:12:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:56.530 16:12:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:56.530 16:12:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:56.530 16:12:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:56.530 16:12:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:56.530 16:12:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:56.530 16:12:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:56.530 16:12:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:56.530 1+0 records in 00:15:56.530 1+0 records out 00:15:56.530 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000366664 s, 11.2 MB/s 00:15:56.530 16:12:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:56.530 16:12:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:56.530 16:12:22 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:56.530 16:12:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:56.530 16:12:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:56.530 16:12:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:56.530 16:12:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:56.530 16:12:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:15:56.530 16:12:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:15:56.530 16:12:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:15:56.530 16:12:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:15:57.098 496+0 records in 00:15:57.098 496+0 records out 00:15:57.098 65011712 bytes (65 MB, 62 MiB) copied, 0.399995 s, 163 MB/s 00:15:57.098 16:12:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:57.098 16:12:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:57.098 16:12:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:57.098 16:12:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:57.098 16:12:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:57.098 16:12:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:57.098 16:12:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:57.360 [2024-12-12 
16:12:23.480012] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:57.360 16:12:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:57.360 16:12:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:57.360 16:12:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:57.360 16:12:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:57.360 16:12:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:57.360 16:12:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:57.360 16:12:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:57.360 16:12:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:57.360 16:12:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:57.360 16:12:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.360 16:12:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.360 [2024-12-12 16:12:23.511261] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:57.360 16:12:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.360 16:12:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:57.360 16:12:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:57.360 16:12:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:57.360 16:12:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:57.360 16:12:23 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:57.360 16:12:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:57.360 16:12:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:57.360 16:12:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:57.360 16:12:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:57.361 16:12:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:57.361 16:12:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.361 16:12:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.361 16:12:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.361 16:12:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:57.361 16:12:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.361 16:12:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:57.361 "name": "raid_bdev1", 00:15:57.361 "uuid": "92d9ee36-a52f-4a74-a2df-121c6954c230", 00:15:57.361 "strip_size_kb": 64, 00:15:57.361 "state": "online", 00:15:57.361 "raid_level": "raid5f", 00:15:57.361 "superblock": true, 00:15:57.361 "num_base_bdevs": 3, 00:15:57.361 "num_base_bdevs_discovered": 2, 00:15:57.361 "num_base_bdevs_operational": 2, 00:15:57.361 "base_bdevs_list": [ 00:15:57.361 { 00:15:57.361 "name": null, 00:15:57.361 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.361 "is_configured": false, 00:15:57.361 "data_offset": 0, 00:15:57.361 "data_size": 63488 00:15:57.361 }, 00:15:57.361 { 00:15:57.361 "name": "BaseBdev2", 00:15:57.361 "uuid": "49476007-0b9f-57b4-894d-8859a2e46162", 00:15:57.361 
"is_configured": true, 00:15:57.361 "data_offset": 2048, 00:15:57.361 "data_size": 63488 00:15:57.361 }, 00:15:57.361 { 00:15:57.361 "name": "BaseBdev3", 00:15:57.361 "uuid": "54e1c561-e5bb-525f-ae49-892be85f005f", 00:15:57.361 "is_configured": true, 00:15:57.361 "data_offset": 2048, 00:15:57.361 "data_size": 63488 00:15:57.361 } 00:15:57.361 ] 00:15:57.361 }' 00:15:57.361 16:12:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:57.361 16:12:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.628 16:12:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:57.628 16:12:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.628 16:12:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.628 [2024-12-12 16:12:23.958529] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:57.896 [2024-12-12 16:12:23.976438] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028f80 00:15:57.896 16:12:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.896 16:12:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:57.896 [2024-12-12 16:12:23.984479] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:58.837 16:12:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:58.837 16:12:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:58.837 16:12:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:58.837 16:12:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:58.837 16:12:24 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:58.837 16:12:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.837 16:12:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.837 16:12:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:58.837 16:12:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.837 16:12:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.837 16:12:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:58.837 "name": "raid_bdev1", 00:15:58.837 "uuid": "92d9ee36-a52f-4a74-a2df-121c6954c230", 00:15:58.837 "strip_size_kb": 64, 00:15:58.837 "state": "online", 00:15:58.837 "raid_level": "raid5f", 00:15:58.837 "superblock": true, 00:15:58.837 "num_base_bdevs": 3, 00:15:58.837 "num_base_bdevs_discovered": 3, 00:15:58.837 "num_base_bdevs_operational": 3, 00:15:58.837 "process": { 00:15:58.837 "type": "rebuild", 00:15:58.837 "target": "spare", 00:15:58.837 "progress": { 00:15:58.837 "blocks": 20480, 00:15:58.837 "percent": 16 00:15:58.837 } 00:15:58.837 }, 00:15:58.837 "base_bdevs_list": [ 00:15:58.837 { 00:15:58.837 "name": "spare", 00:15:58.837 "uuid": "d99d8332-9e56-56ec-9ae9-575bce21c1f1", 00:15:58.837 "is_configured": true, 00:15:58.837 "data_offset": 2048, 00:15:58.837 "data_size": 63488 00:15:58.837 }, 00:15:58.837 { 00:15:58.837 "name": "BaseBdev2", 00:15:58.837 "uuid": "49476007-0b9f-57b4-894d-8859a2e46162", 00:15:58.837 "is_configured": true, 00:15:58.837 "data_offset": 2048, 00:15:58.837 "data_size": 63488 00:15:58.838 }, 00:15:58.838 { 00:15:58.838 "name": "BaseBdev3", 00:15:58.838 "uuid": "54e1c561-e5bb-525f-ae49-892be85f005f", 00:15:58.838 "is_configured": true, 00:15:58.838 "data_offset": 2048, 00:15:58.838 "data_size": 
63488 00:15:58.838 } 00:15:58.838 ] 00:15:58.838 }' 00:15:58.838 16:12:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:58.838 16:12:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:58.838 16:12:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:58.838 16:12:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:58.838 16:12:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:58.838 16:12:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.838 16:12:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.838 [2024-12-12 16:12:25.139592] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:59.097 [2024-12-12 16:12:25.194682] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:59.097 [2024-12-12 16:12:25.194804] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:59.097 [2024-12-12 16:12:25.194829] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:59.097 [2024-12-12 16:12:25.194840] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:59.097 16:12:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.097 16:12:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:59.097 16:12:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:59.097 16:12:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:59.097 16:12:25 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:59.097 16:12:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:59.097 16:12:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:59.097 16:12:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:59.097 16:12:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:59.097 16:12:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:59.097 16:12:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:59.097 16:12:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.097 16:12:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:59.097 16:12:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.097 16:12:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.097 16:12:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.097 16:12:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:59.097 "name": "raid_bdev1", 00:15:59.097 "uuid": "92d9ee36-a52f-4a74-a2df-121c6954c230", 00:15:59.097 "strip_size_kb": 64, 00:15:59.097 "state": "online", 00:15:59.097 "raid_level": "raid5f", 00:15:59.097 "superblock": true, 00:15:59.097 "num_base_bdevs": 3, 00:15:59.097 "num_base_bdevs_discovered": 2, 00:15:59.097 "num_base_bdevs_operational": 2, 00:15:59.097 "base_bdevs_list": [ 00:15:59.097 { 00:15:59.097 "name": null, 00:15:59.097 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:59.097 "is_configured": false, 00:15:59.097 "data_offset": 0, 00:15:59.097 "data_size": 63488 
00:15:59.097 }, 00:15:59.097 { 00:15:59.097 "name": "BaseBdev2", 00:15:59.097 "uuid": "49476007-0b9f-57b4-894d-8859a2e46162", 00:15:59.097 "is_configured": true, 00:15:59.097 "data_offset": 2048, 00:15:59.097 "data_size": 63488 00:15:59.097 }, 00:15:59.097 { 00:15:59.097 "name": "BaseBdev3", 00:15:59.097 "uuid": "54e1c561-e5bb-525f-ae49-892be85f005f", 00:15:59.097 "is_configured": true, 00:15:59.097 "data_offset": 2048, 00:15:59.097 "data_size": 63488 00:15:59.097 } 00:15:59.097 ] 00:15:59.097 }' 00:15:59.097 16:12:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:59.097 16:12:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.356 16:12:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:59.356 16:12:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:59.356 16:12:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:59.356 16:12:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:59.356 16:12:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:59.356 16:12:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.356 16:12:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.356 16:12:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.357 16:12:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:59.357 16:12:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.357 16:12:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:59.357 "name": "raid_bdev1", 00:15:59.357 "uuid": 
"92d9ee36-a52f-4a74-a2df-121c6954c230", 00:15:59.357 "strip_size_kb": 64, 00:15:59.357 "state": "online", 00:15:59.357 "raid_level": "raid5f", 00:15:59.357 "superblock": true, 00:15:59.357 "num_base_bdevs": 3, 00:15:59.357 "num_base_bdevs_discovered": 2, 00:15:59.357 "num_base_bdevs_operational": 2, 00:15:59.357 "base_bdevs_list": [ 00:15:59.357 { 00:15:59.357 "name": null, 00:15:59.357 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:59.357 "is_configured": false, 00:15:59.357 "data_offset": 0, 00:15:59.357 "data_size": 63488 00:15:59.357 }, 00:15:59.357 { 00:15:59.357 "name": "BaseBdev2", 00:15:59.357 "uuid": "49476007-0b9f-57b4-894d-8859a2e46162", 00:15:59.357 "is_configured": true, 00:15:59.357 "data_offset": 2048, 00:15:59.357 "data_size": 63488 00:15:59.357 }, 00:15:59.357 { 00:15:59.357 "name": "BaseBdev3", 00:15:59.357 "uuid": "54e1c561-e5bb-525f-ae49-892be85f005f", 00:15:59.357 "is_configured": true, 00:15:59.357 "data_offset": 2048, 00:15:59.357 "data_size": 63488 00:15:59.357 } 00:15:59.357 ] 00:15:59.357 }' 00:15:59.357 16:12:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:59.617 16:12:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:59.617 16:12:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:59.617 16:12:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:59.617 16:12:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:59.617 16:12:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.617 16:12:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.617 [2024-12-12 16:12:25.800746] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:59.617 [2024-12-12 16:12:25.816360] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029050 00:15:59.617 16:12:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.617 16:12:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:59.617 [2024-12-12 16:12:25.823500] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:00.556 16:12:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:00.556 16:12:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:00.556 16:12:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:00.556 16:12:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:00.556 16:12:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:00.556 16:12:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.556 16:12:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.556 16:12:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:00.556 16:12:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.556 16:12:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.556 16:12:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:00.556 "name": "raid_bdev1", 00:16:00.556 "uuid": "92d9ee36-a52f-4a74-a2df-121c6954c230", 00:16:00.556 "strip_size_kb": 64, 00:16:00.556 "state": "online", 00:16:00.556 "raid_level": "raid5f", 00:16:00.556 "superblock": true, 00:16:00.556 "num_base_bdevs": 3, 00:16:00.556 "num_base_bdevs_discovered": 3, 00:16:00.556 
"num_base_bdevs_operational": 3, 00:16:00.556 "process": { 00:16:00.556 "type": "rebuild", 00:16:00.556 "target": "spare", 00:16:00.556 "progress": { 00:16:00.556 "blocks": 20480, 00:16:00.556 "percent": 16 00:16:00.556 } 00:16:00.556 }, 00:16:00.556 "base_bdevs_list": [ 00:16:00.556 { 00:16:00.556 "name": "spare", 00:16:00.556 "uuid": "d99d8332-9e56-56ec-9ae9-575bce21c1f1", 00:16:00.556 "is_configured": true, 00:16:00.556 "data_offset": 2048, 00:16:00.556 "data_size": 63488 00:16:00.556 }, 00:16:00.556 { 00:16:00.556 "name": "BaseBdev2", 00:16:00.556 "uuid": "49476007-0b9f-57b4-894d-8859a2e46162", 00:16:00.556 "is_configured": true, 00:16:00.556 "data_offset": 2048, 00:16:00.556 "data_size": 63488 00:16:00.556 }, 00:16:00.556 { 00:16:00.556 "name": "BaseBdev3", 00:16:00.556 "uuid": "54e1c561-e5bb-525f-ae49-892be85f005f", 00:16:00.557 "is_configured": true, 00:16:00.557 "data_offset": 2048, 00:16:00.557 "data_size": 63488 00:16:00.557 } 00:16:00.557 ] 00:16:00.557 }' 00:16:00.557 16:12:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:00.815 16:12:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:00.815 16:12:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:00.815 16:12:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:00.816 16:12:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:00.816 16:12:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:00.816 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:00.816 16:12:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:16:00.816 16:12:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:16:00.816 
16:12:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=574 00:16:00.816 16:12:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:00.816 16:12:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:00.816 16:12:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:00.816 16:12:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:00.816 16:12:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:00.816 16:12:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:00.816 16:12:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:00.816 16:12:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.816 16:12:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.816 16:12:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.816 16:12:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.816 16:12:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:00.816 "name": "raid_bdev1", 00:16:00.816 "uuid": "92d9ee36-a52f-4a74-a2df-121c6954c230", 00:16:00.816 "strip_size_kb": 64, 00:16:00.816 "state": "online", 00:16:00.816 "raid_level": "raid5f", 00:16:00.816 "superblock": true, 00:16:00.816 "num_base_bdevs": 3, 00:16:00.816 "num_base_bdevs_discovered": 3, 00:16:00.816 "num_base_bdevs_operational": 3, 00:16:00.816 "process": { 00:16:00.816 "type": "rebuild", 00:16:00.816 "target": "spare", 00:16:00.816 "progress": { 00:16:00.816 "blocks": 22528, 00:16:00.816 "percent": 17 00:16:00.816 } 00:16:00.816 }, 
00:16:00.816 "base_bdevs_list": [ 00:16:00.816 { 00:16:00.816 "name": "spare", 00:16:00.816 "uuid": "d99d8332-9e56-56ec-9ae9-575bce21c1f1", 00:16:00.816 "is_configured": true, 00:16:00.816 "data_offset": 2048, 00:16:00.816 "data_size": 63488 00:16:00.816 }, 00:16:00.816 { 00:16:00.816 "name": "BaseBdev2", 00:16:00.816 "uuid": "49476007-0b9f-57b4-894d-8859a2e46162", 00:16:00.816 "is_configured": true, 00:16:00.816 "data_offset": 2048, 00:16:00.816 "data_size": 63488 00:16:00.816 }, 00:16:00.816 { 00:16:00.816 "name": "BaseBdev3", 00:16:00.816 "uuid": "54e1c561-e5bb-525f-ae49-892be85f005f", 00:16:00.816 "is_configured": true, 00:16:00.816 "data_offset": 2048, 00:16:00.816 "data_size": 63488 00:16:00.816 } 00:16:00.816 ] 00:16:00.816 }' 00:16:00.816 16:12:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:00.816 16:12:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:00.816 16:12:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:00.816 16:12:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:00.816 16:12:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:02.197 16:12:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:02.197 16:12:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:02.197 16:12:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:02.197 16:12:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:02.197 16:12:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:02.197 16:12:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:02.197 
16:12:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:02.197 16:12:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.197 16:12:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.197 16:12:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.197 16:12:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.197 16:12:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:02.197 "name": "raid_bdev1", 00:16:02.197 "uuid": "92d9ee36-a52f-4a74-a2df-121c6954c230", 00:16:02.197 "strip_size_kb": 64, 00:16:02.197 "state": "online", 00:16:02.197 "raid_level": "raid5f", 00:16:02.197 "superblock": true, 00:16:02.197 "num_base_bdevs": 3, 00:16:02.197 "num_base_bdevs_discovered": 3, 00:16:02.197 "num_base_bdevs_operational": 3, 00:16:02.197 "process": { 00:16:02.197 "type": "rebuild", 00:16:02.197 "target": "spare", 00:16:02.197 "progress": { 00:16:02.197 "blocks": 45056, 00:16:02.197 "percent": 35 00:16:02.197 } 00:16:02.197 }, 00:16:02.197 "base_bdevs_list": [ 00:16:02.197 { 00:16:02.197 "name": "spare", 00:16:02.197 "uuid": "d99d8332-9e56-56ec-9ae9-575bce21c1f1", 00:16:02.197 "is_configured": true, 00:16:02.197 "data_offset": 2048, 00:16:02.197 "data_size": 63488 00:16:02.197 }, 00:16:02.197 { 00:16:02.197 "name": "BaseBdev2", 00:16:02.197 "uuid": "49476007-0b9f-57b4-894d-8859a2e46162", 00:16:02.197 "is_configured": true, 00:16:02.197 "data_offset": 2048, 00:16:02.197 "data_size": 63488 00:16:02.197 }, 00:16:02.197 { 00:16:02.197 "name": "BaseBdev3", 00:16:02.197 "uuid": "54e1c561-e5bb-525f-ae49-892be85f005f", 00:16:02.197 "is_configured": true, 00:16:02.197 "data_offset": 2048, 00:16:02.197 "data_size": 63488 00:16:02.197 } 00:16:02.197 ] 00:16:02.197 }' 00:16:02.197 16:12:28 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:02.197 16:12:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:02.197 16:12:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:02.197 16:12:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:02.197 16:12:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:03.137 16:12:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:03.137 16:12:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:03.137 16:12:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:03.137 16:12:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:03.137 16:12:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:03.137 16:12:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:03.137 16:12:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.137 16:12:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:03.137 16:12:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.137 16:12:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.137 16:12:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.137 16:12:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:03.137 "name": "raid_bdev1", 00:16:03.137 "uuid": "92d9ee36-a52f-4a74-a2df-121c6954c230", 00:16:03.137 
"strip_size_kb": 64, 00:16:03.137 "state": "online", 00:16:03.137 "raid_level": "raid5f", 00:16:03.137 "superblock": true, 00:16:03.137 "num_base_bdevs": 3, 00:16:03.137 "num_base_bdevs_discovered": 3, 00:16:03.137 "num_base_bdevs_operational": 3, 00:16:03.137 "process": { 00:16:03.137 "type": "rebuild", 00:16:03.137 "target": "spare", 00:16:03.137 "progress": { 00:16:03.137 "blocks": 69632, 00:16:03.137 "percent": 54 00:16:03.137 } 00:16:03.137 }, 00:16:03.137 "base_bdevs_list": [ 00:16:03.137 { 00:16:03.137 "name": "spare", 00:16:03.137 "uuid": "d99d8332-9e56-56ec-9ae9-575bce21c1f1", 00:16:03.137 "is_configured": true, 00:16:03.137 "data_offset": 2048, 00:16:03.137 "data_size": 63488 00:16:03.137 }, 00:16:03.137 { 00:16:03.137 "name": "BaseBdev2", 00:16:03.137 "uuid": "49476007-0b9f-57b4-894d-8859a2e46162", 00:16:03.137 "is_configured": true, 00:16:03.137 "data_offset": 2048, 00:16:03.137 "data_size": 63488 00:16:03.137 }, 00:16:03.137 { 00:16:03.137 "name": "BaseBdev3", 00:16:03.137 "uuid": "54e1c561-e5bb-525f-ae49-892be85f005f", 00:16:03.137 "is_configured": true, 00:16:03.137 "data_offset": 2048, 00:16:03.137 "data_size": 63488 00:16:03.137 } 00:16:03.137 ] 00:16:03.137 }' 00:16:03.137 16:12:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:03.137 16:12:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:03.137 16:12:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:03.137 16:12:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:03.137 16:12:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:04.518 16:12:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:04.518 16:12:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 
00:16:04.518 16:12:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:04.518 16:12:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:04.518 16:12:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:04.518 16:12:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:04.518 16:12:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.518 16:12:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.518 16:12:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.518 16:12:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:04.518 16:12:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.518 16:12:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:04.518 "name": "raid_bdev1", 00:16:04.518 "uuid": "92d9ee36-a52f-4a74-a2df-121c6954c230", 00:16:04.518 "strip_size_kb": 64, 00:16:04.518 "state": "online", 00:16:04.518 "raid_level": "raid5f", 00:16:04.518 "superblock": true, 00:16:04.518 "num_base_bdevs": 3, 00:16:04.518 "num_base_bdevs_discovered": 3, 00:16:04.518 "num_base_bdevs_operational": 3, 00:16:04.518 "process": { 00:16:04.518 "type": "rebuild", 00:16:04.518 "target": "spare", 00:16:04.518 "progress": { 00:16:04.518 "blocks": 92160, 00:16:04.518 "percent": 72 00:16:04.518 } 00:16:04.518 }, 00:16:04.518 "base_bdevs_list": [ 00:16:04.518 { 00:16:04.518 "name": "spare", 00:16:04.518 "uuid": "d99d8332-9e56-56ec-9ae9-575bce21c1f1", 00:16:04.518 "is_configured": true, 00:16:04.518 "data_offset": 2048, 00:16:04.518 "data_size": 63488 00:16:04.518 }, 00:16:04.518 { 00:16:04.518 "name": "BaseBdev2", 00:16:04.518 "uuid": 
"49476007-0b9f-57b4-894d-8859a2e46162", 00:16:04.518 "is_configured": true, 00:16:04.518 "data_offset": 2048, 00:16:04.518 "data_size": 63488 00:16:04.518 }, 00:16:04.518 { 00:16:04.518 "name": "BaseBdev3", 00:16:04.518 "uuid": "54e1c561-e5bb-525f-ae49-892be85f005f", 00:16:04.518 "is_configured": true, 00:16:04.518 "data_offset": 2048, 00:16:04.518 "data_size": 63488 00:16:04.518 } 00:16:04.518 ] 00:16:04.518 }' 00:16:04.518 16:12:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:04.518 16:12:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:04.518 16:12:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:04.518 16:12:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:04.518 16:12:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:05.458 16:12:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:05.458 16:12:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:05.458 16:12:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:05.458 16:12:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:05.458 16:12:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:05.458 16:12:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:05.458 16:12:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.458 16:12:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:05.458 16:12:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:05.458 16:12:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.458 16:12:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.458 16:12:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:05.458 "name": "raid_bdev1", 00:16:05.458 "uuid": "92d9ee36-a52f-4a74-a2df-121c6954c230", 00:16:05.458 "strip_size_kb": 64, 00:16:05.458 "state": "online", 00:16:05.458 "raid_level": "raid5f", 00:16:05.458 "superblock": true, 00:16:05.458 "num_base_bdevs": 3, 00:16:05.458 "num_base_bdevs_discovered": 3, 00:16:05.458 "num_base_bdevs_operational": 3, 00:16:05.458 "process": { 00:16:05.458 "type": "rebuild", 00:16:05.458 "target": "spare", 00:16:05.458 "progress": { 00:16:05.458 "blocks": 116736, 00:16:05.458 "percent": 91 00:16:05.458 } 00:16:05.458 }, 00:16:05.458 "base_bdevs_list": [ 00:16:05.458 { 00:16:05.458 "name": "spare", 00:16:05.458 "uuid": "d99d8332-9e56-56ec-9ae9-575bce21c1f1", 00:16:05.458 "is_configured": true, 00:16:05.458 "data_offset": 2048, 00:16:05.458 "data_size": 63488 00:16:05.458 }, 00:16:05.458 { 00:16:05.458 "name": "BaseBdev2", 00:16:05.458 "uuid": "49476007-0b9f-57b4-894d-8859a2e46162", 00:16:05.458 "is_configured": true, 00:16:05.458 "data_offset": 2048, 00:16:05.458 "data_size": 63488 00:16:05.458 }, 00:16:05.458 { 00:16:05.458 "name": "BaseBdev3", 00:16:05.458 "uuid": "54e1c561-e5bb-525f-ae49-892be85f005f", 00:16:05.458 "is_configured": true, 00:16:05.458 "data_offset": 2048, 00:16:05.458 "data_size": 63488 00:16:05.458 } 00:16:05.458 ] 00:16:05.458 }' 00:16:05.458 16:12:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:05.458 16:12:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:05.458 16:12:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:05.458 
16:12:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:05.458 16:12:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:06.027 [2024-12-12 16:12:32.072369] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:06.027 [2024-12-12 16:12:32.072486] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:06.027 [2024-12-12 16:12:32.072617] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:06.597 16:12:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:06.597 16:12:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:06.597 16:12:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:06.597 16:12:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:06.597 16:12:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:06.597 16:12:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:06.597 16:12:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.597 16:12:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.597 16:12:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:06.597 16:12:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.597 16:12:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.597 16:12:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:06.597 "name": "raid_bdev1", 00:16:06.597 "uuid": 
"92d9ee36-a52f-4a74-a2df-121c6954c230", 00:16:06.597 "strip_size_kb": 64, 00:16:06.597 "state": "online", 00:16:06.597 "raid_level": "raid5f", 00:16:06.597 "superblock": true, 00:16:06.597 "num_base_bdevs": 3, 00:16:06.597 "num_base_bdevs_discovered": 3, 00:16:06.597 "num_base_bdevs_operational": 3, 00:16:06.597 "base_bdevs_list": [ 00:16:06.597 { 00:16:06.597 "name": "spare", 00:16:06.597 "uuid": "d99d8332-9e56-56ec-9ae9-575bce21c1f1", 00:16:06.597 "is_configured": true, 00:16:06.597 "data_offset": 2048, 00:16:06.597 "data_size": 63488 00:16:06.597 }, 00:16:06.597 { 00:16:06.597 "name": "BaseBdev2", 00:16:06.597 "uuid": "49476007-0b9f-57b4-894d-8859a2e46162", 00:16:06.597 "is_configured": true, 00:16:06.597 "data_offset": 2048, 00:16:06.597 "data_size": 63488 00:16:06.597 }, 00:16:06.597 { 00:16:06.597 "name": "BaseBdev3", 00:16:06.597 "uuid": "54e1c561-e5bb-525f-ae49-892be85f005f", 00:16:06.597 "is_configured": true, 00:16:06.597 "data_offset": 2048, 00:16:06.597 "data_size": 63488 00:16:06.597 } 00:16:06.597 ] 00:16:06.597 }' 00:16:06.597 16:12:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:06.597 16:12:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:06.597 16:12:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:06.597 16:12:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:06.597 16:12:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:16:06.597 16:12:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:06.597 16:12:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:06.597 16:12:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:06.597 16:12:32 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:06.597 16:12:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:06.597 16:12:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.597 16:12:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:06.597 16:12:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.597 16:12:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.597 16:12:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.597 16:12:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:06.597 "name": "raid_bdev1", 00:16:06.597 "uuid": "92d9ee36-a52f-4a74-a2df-121c6954c230", 00:16:06.597 "strip_size_kb": 64, 00:16:06.597 "state": "online", 00:16:06.597 "raid_level": "raid5f", 00:16:06.597 "superblock": true, 00:16:06.597 "num_base_bdevs": 3, 00:16:06.597 "num_base_bdevs_discovered": 3, 00:16:06.597 "num_base_bdevs_operational": 3, 00:16:06.597 "base_bdevs_list": [ 00:16:06.597 { 00:16:06.597 "name": "spare", 00:16:06.597 "uuid": "d99d8332-9e56-56ec-9ae9-575bce21c1f1", 00:16:06.597 "is_configured": true, 00:16:06.597 "data_offset": 2048, 00:16:06.597 "data_size": 63488 00:16:06.597 }, 00:16:06.597 { 00:16:06.597 "name": "BaseBdev2", 00:16:06.597 "uuid": "49476007-0b9f-57b4-894d-8859a2e46162", 00:16:06.597 "is_configured": true, 00:16:06.597 "data_offset": 2048, 00:16:06.597 "data_size": 63488 00:16:06.597 }, 00:16:06.597 { 00:16:06.597 "name": "BaseBdev3", 00:16:06.597 "uuid": "54e1c561-e5bb-525f-ae49-892be85f005f", 00:16:06.597 "is_configured": true, 00:16:06.597 "data_offset": 2048, 00:16:06.597 "data_size": 63488 00:16:06.597 } 00:16:06.597 ] 00:16:06.597 }' 00:16:06.597 16:12:32 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:06.858 16:12:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:06.858 16:12:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:06.858 16:12:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:06.858 16:12:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:06.858 16:12:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:06.858 16:12:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:06.858 16:12:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:06.858 16:12:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:06.858 16:12:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:06.858 16:12:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:06.858 16:12:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:06.858 16:12:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:06.858 16:12:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:06.858 16:12:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.858 16:12:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:06.858 16:12:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.858 16:12:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:16:06.858 16:12:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.858 16:12:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:06.858 "name": "raid_bdev1", 00:16:06.858 "uuid": "92d9ee36-a52f-4a74-a2df-121c6954c230", 00:16:06.858 "strip_size_kb": 64, 00:16:06.858 "state": "online", 00:16:06.858 "raid_level": "raid5f", 00:16:06.858 "superblock": true, 00:16:06.858 "num_base_bdevs": 3, 00:16:06.858 "num_base_bdevs_discovered": 3, 00:16:06.858 "num_base_bdevs_operational": 3, 00:16:06.858 "base_bdevs_list": [ 00:16:06.858 { 00:16:06.858 "name": "spare", 00:16:06.858 "uuid": "d99d8332-9e56-56ec-9ae9-575bce21c1f1", 00:16:06.858 "is_configured": true, 00:16:06.858 "data_offset": 2048, 00:16:06.858 "data_size": 63488 00:16:06.858 }, 00:16:06.858 { 00:16:06.858 "name": "BaseBdev2", 00:16:06.858 "uuid": "49476007-0b9f-57b4-894d-8859a2e46162", 00:16:06.858 "is_configured": true, 00:16:06.858 "data_offset": 2048, 00:16:06.858 "data_size": 63488 00:16:06.858 }, 00:16:06.858 { 00:16:06.858 "name": "BaseBdev3", 00:16:06.858 "uuid": "54e1c561-e5bb-525f-ae49-892be85f005f", 00:16:06.858 "is_configured": true, 00:16:06.858 "data_offset": 2048, 00:16:06.858 "data_size": 63488 00:16:06.858 } 00:16:06.858 ] 00:16:06.858 }' 00:16:06.858 16:12:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:06.858 16:12:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.118 16:12:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:07.118 16:12:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.118 16:12:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.118 [2024-12-12 16:12:33.432519] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:07.118 [2024-12-12 
16:12:33.432669] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:07.118 [2024-12-12 16:12:33.432789] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:07.118 [2024-12-12 16:12:33.432895] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:07.118 [2024-12-12 16:12:33.432932] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:07.118 16:12:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.118 16:12:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:16:07.118 16:12:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.118 16:12:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.118 16:12:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.118 16:12:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.378 16:12:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:07.378 16:12:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:07.378 16:12:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:07.378 16:12:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:07.378 16:12:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:07.378 16:12:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:07.378 16:12:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:07.378 16:12:33 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:07.378 16:12:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:07.378 16:12:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:16:07.378 16:12:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:07.378 16:12:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:07.378 16:12:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:07.378 /dev/nbd0 00:16:07.378 16:12:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:07.378 16:12:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:07.378 16:12:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:07.378 16:12:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:16:07.378 16:12:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:07.378 16:12:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:07.378 16:12:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:07.378 16:12:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:16:07.378 16:12:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:07.378 16:12:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:07.378 16:12:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:07.378 1+0 records in 00:16:07.378 1+0 
records out 00:16:07.378 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00063865 s, 6.4 MB/s 00:16:07.638 16:12:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:07.638 16:12:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:16:07.638 16:12:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:07.638 16:12:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:07.638 16:12:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:16:07.638 16:12:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:07.638 16:12:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:07.638 16:12:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:07.638 /dev/nbd1 00:16:07.638 16:12:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:07.638 16:12:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:07.638 16:12:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:07.638 16:12:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:16:07.638 16:12:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:07.638 16:12:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:07.638 16:12:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:07.638 16:12:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:16:07.638 16:12:33 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:07.638 16:12:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:07.638 16:12:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:07.638 1+0 records in 00:16:07.638 1+0 records out 00:16:07.638 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000541244 s, 7.6 MB/s 00:16:07.638 16:12:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:07.638 16:12:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:16:07.638 16:12:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:07.638 16:12:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:07.638 16:12:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:16:07.638 16:12:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:07.638 16:12:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:07.639 16:12:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:07.898 16:12:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:07.898 16:12:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:07.898 16:12:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:07.898 16:12:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:07.898 16:12:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 
-- # local i 00:16:07.898 16:12:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:07.898 16:12:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:08.158 16:12:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:08.158 16:12:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:08.158 16:12:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:08.158 16:12:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:08.158 16:12:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:08.158 16:12:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:08.158 16:12:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:08.158 16:12:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:08.158 16:12:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:08.158 16:12:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:08.418 16:12:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:08.418 16:12:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:08.418 16:12:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:08.418 16:12:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:08.418 16:12:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:08.418 16:12:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd1 /proc/partitions 00:16:08.418 16:12:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:08.418 16:12:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:08.418 16:12:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:08.418 16:12:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:08.418 16:12:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.418 16:12:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.418 16:12:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.418 16:12:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:08.418 16:12:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.418 16:12:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.418 [2024-12-12 16:12:34.663371] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:08.418 [2024-12-12 16:12:34.663477] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:08.418 [2024-12-12 16:12:34.663509] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:16:08.419 [2024-12-12 16:12:34.663525] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:08.419 [2024-12-12 16:12:34.666247] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:08.419 [2024-12-12 16:12:34.666384] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:08.419 [2024-12-12 16:12:34.666516] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:08.419 [2024-12-12 16:12:34.666603] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:08.419 [2024-12-12 16:12:34.666774] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:08.419 [2024-12-12 16:12:34.666895] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:08.419 spare 00:16:08.419 16:12:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.419 16:12:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:08.419 16:12:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.419 16:12:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.419 [2024-12-12 16:12:34.766858] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:16:08.419 [2024-12-12 16:12:34.766958] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:08.419 [2024-12-12 16:12:34.767429] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:16:08.678 [2024-12-12 16:12:34.773215] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:16:08.678 [2024-12-12 16:12:34.773243] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:16:08.678 [2024-12-12 16:12:34.773520] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:08.678 16:12:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.678 16:12:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:08.678 16:12:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:08.678 16:12:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # 
local expected_state=online 00:16:08.678 16:12:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:08.678 16:12:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:08.678 16:12:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:08.678 16:12:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:08.678 16:12:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:08.678 16:12:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:08.678 16:12:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:08.678 16:12:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.678 16:12:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:08.678 16:12:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.678 16:12:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.678 16:12:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.679 16:12:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:08.679 "name": "raid_bdev1", 00:16:08.679 "uuid": "92d9ee36-a52f-4a74-a2df-121c6954c230", 00:16:08.679 "strip_size_kb": 64, 00:16:08.679 "state": "online", 00:16:08.679 "raid_level": "raid5f", 00:16:08.679 "superblock": true, 00:16:08.679 "num_base_bdevs": 3, 00:16:08.679 "num_base_bdevs_discovered": 3, 00:16:08.679 "num_base_bdevs_operational": 3, 00:16:08.679 "base_bdevs_list": [ 00:16:08.679 { 00:16:08.679 "name": "spare", 00:16:08.679 "uuid": "d99d8332-9e56-56ec-9ae9-575bce21c1f1", 00:16:08.679 "is_configured": true, 00:16:08.679 
"data_offset": 2048, 00:16:08.679 "data_size": 63488 00:16:08.679 }, 00:16:08.679 { 00:16:08.679 "name": "BaseBdev2", 00:16:08.679 "uuid": "49476007-0b9f-57b4-894d-8859a2e46162", 00:16:08.679 "is_configured": true, 00:16:08.679 "data_offset": 2048, 00:16:08.679 "data_size": 63488 00:16:08.679 }, 00:16:08.679 { 00:16:08.679 "name": "BaseBdev3", 00:16:08.679 "uuid": "54e1c561-e5bb-525f-ae49-892be85f005f", 00:16:08.679 "is_configured": true, 00:16:08.679 "data_offset": 2048, 00:16:08.679 "data_size": 63488 00:16:08.679 } 00:16:08.679 ] 00:16:08.679 }' 00:16:08.679 16:12:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:08.679 16:12:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.942 16:12:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:08.942 16:12:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:08.942 16:12:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:08.942 16:12:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:08.942 16:12:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:08.942 16:12:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.942 16:12:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.942 16:12:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:08.942 16:12:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.942 16:12:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.202 16:12:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:09.202 
"name": "raid_bdev1", 00:16:09.202 "uuid": "92d9ee36-a52f-4a74-a2df-121c6954c230", 00:16:09.202 "strip_size_kb": 64, 00:16:09.202 "state": "online", 00:16:09.202 "raid_level": "raid5f", 00:16:09.202 "superblock": true, 00:16:09.202 "num_base_bdevs": 3, 00:16:09.202 "num_base_bdevs_discovered": 3, 00:16:09.202 "num_base_bdevs_operational": 3, 00:16:09.202 "base_bdevs_list": [ 00:16:09.202 { 00:16:09.202 "name": "spare", 00:16:09.202 "uuid": "d99d8332-9e56-56ec-9ae9-575bce21c1f1", 00:16:09.202 "is_configured": true, 00:16:09.202 "data_offset": 2048, 00:16:09.202 "data_size": 63488 00:16:09.202 }, 00:16:09.202 { 00:16:09.202 "name": "BaseBdev2", 00:16:09.202 "uuid": "49476007-0b9f-57b4-894d-8859a2e46162", 00:16:09.202 "is_configured": true, 00:16:09.202 "data_offset": 2048, 00:16:09.202 "data_size": 63488 00:16:09.202 }, 00:16:09.202 { 00:16:09.202 "name": "BaseBdev3", 00:16:09.202 "uuid": "54e1c561-e5bb-525f-ae49-892be85f005f", 00:16:09.202 "is_configured": true, 00:16:09.202 "data_offset": 2048, 00:16:09.202 "data_size": 63488 00:16:09.202 } 00:16:09.202 ] 00:16:09.202 }' 00:16:09.202 16:12:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:09.202 16:12:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:09.202 16:12:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:09.202 16:12:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:09.202 16:12:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.202 16:12:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.202 16:12:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.202 16:12:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:16:09.202 
16:12:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.202 16:12:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:09.202 16:12:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:09.202 16:12:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.202 16:12:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.202 [2024-12-12 16:12:35.427778] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:09.202 16:12:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.202 16:12:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:09.202 16:12:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:09.202 16:12:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:09.202 16:12:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:09.202 16:12:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:09.202 16:12:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:09.202 16:12:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:09.202 16:12:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:09.202 16:12:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:09.202 16:12:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:09.202 16:12:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:16:09.202 16:12:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.202 16:12:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:09.202 16:12:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.202 16:12:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.202 16:12:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:09.202 "name": "raid_bdev1", 00:16:09.202 "uuid": "92d9ee36-a52f-4a74-a2df-121c6954c230", 00:16:09.202 "strip_size_kb": 64, 00:16:09.202 "state": "online", 00:16:09.202 "raid_level": "raid5f", 00:16:09.202 "superblock": true, 00:16:09.202 "num_base_bdevs": 3, 00:16:09.202 "num_base_bdevs_discovered": 2, 00:16:09.202 "num_base_bdevs_operational": 2, 00:16:09.202 "base_bdevs_list": [ 00:16:09.202 { 00:16:09.202 "name": null, 00:16:09.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.202 "is_configured": false, 00:16:09.202 "data_offset": 0, 00:16:09.202 "data_size": 63488 00:16:09.202 }, 00:16:09.202 { 00:16:09.202 "name": "BaseBdev2", 00:16:09.202 "uuid": "49476007-0b9f-57b4-894d-8859a2e46162", 00:16:09.202 "is_configured": true, 00:16:09.202 "data_offset": 2048, 00:16:09.202 "data_size": 63488 00:16:09.202 }, 00:16:09.202 { 00:16:09.202 "name": "BaseBdev3", 00:16:09.202 "uuid": "54e1c561-e5bb-525f-ae49-892be85f005f", 00:16:09.202 "is_configured": true, 00:16:09.202 "data_offset": 2048, 00:16:09.202 "data_size": 63488 00:16:09.202 } 00:16:09.202 ] 00:16:09.202 }' 00:16:09.202 16:12:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:09.202 16:12:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.771 16:12:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:09.771 16:12:35 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.771 16:12:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.771 [2024-12-12 16:12:35.879120] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:09.771 [2024-12-12 16:12:35.879586] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:09.771 [2024-12-12 16:12:35.879681] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:16:09.771 [2024-12-12 16:12:35.879782] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:09.771 [2024-12-12 16:12:35.897207] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000477d0 00:16:09.771 16:12:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.771 16:12:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:09.771 [2024-12-12 16:12:35.904782] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:10.711 16:12:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:10.711 16:12:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:10.711 16:12:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:10.711 16:12:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:10.711 16:12:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:10.711 16:12:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.711 16:12:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:10.711 16:12:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:10.711 16:12:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.711 16:12:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.711 16:12:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:10.711 "name": "raid_bdev1", 00:16:10.711 "uuid": "92d9ee36-a52f-4a74-a2df-121c6954c230", 00:16:10.711 "strip_size_kb": 64, 00:16:10.711 "state": "online", 00:16:10.711 "raid_level": "raid5f", 00:16:10.711 "superblock": true, 00:16:10.711 "num_base_bdevs": 3, 00:16:10.711 "num_base_bdevs_discovered": 3, 00:16:10.711 "num_base_bdevs_operational": 3, 00:16:10.711 "process": { 00:16:10.711 "type": "rebuild", 00:16:10.711 "target": "spare", 00:16:10.711 "progress": { 00:16:10.711 "blocks": 18432, 00:16:10.711 "percent": 14 00:16:10.711 } 00:16:10.711 }, 00:16:10.711 "base_bdevs_list": [ 00:16:10.711 { 00:16:10.711 "name": "spare", 00:16:10.711 "uuid": "d99d8332-9e56-56ec-9ae9-575bce21c1f1", 00:16:10.711 "is_configured": true, 00:16:10.711 "data_offset": 2048, 00:16:10.711 "data_size": 63488 00:16:10.711 }, 00:16:10.711 { 00:16:10.711 "name": "BaseBdev2", 00:16:10.711 "uuid": "49476007-0b9f-57b4-894d-8859a2e46162", 00:16:10.711 "is_configured": true, 00:16:10.711 "data_offset": 2048, 00:16:10.711 "data_size": 63488 00:16:10.711 }, 00:16:10.711 { 00:16:10.711 "name": "BaseBdev3", 00:16:10.711 "uuid": "54e1c561-e5bb-525f-ae49-892be85f005f", 00:16:10.711 "is_configured": true, 00:16:10.711 "data_offset": 2048, 00:16:10.711 "data_size": 63488 00:16:10.711 } 00:16:10.711 ] 00:16:10.711 }' 00:16:10.711 16:12:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:10.711 16:12:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:16:10.711 16:12:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:10.711 16:12:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:10.711 16:12:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:10.711 16:12:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.711 16:12:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.711 [2024-12-12 16:12:37.016211] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:10.711 [2024-12-12 16:12:37.018576] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:10.711 [2024-12-12 16:12:37.018778] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:10.711 [2024-12-12 16:12:37.018804] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:10.711 [2024-12-12 16:12:37.018832] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:10.976 16:12:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.976 16:12:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:10.976 16:12:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:10.976 16:12:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:10.976 16:12:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:10.976 16:12:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:10.976 16:12:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:16:10.976 16:12:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:10.976 16:12:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:10.976 16:12:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:10.976 16:12:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:10.976 16:12:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:10.976 16:12:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.976 16:12:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.976 16:12:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.976 16:12:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.976 16:12:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:10.976 "name": "raid_bdev1", 00:16:10.976 "uuid": "92d9ee36-a52f-4a74-a2df-121c6954c230", 00:16:10.976 "strip_size_kb": 64, 00:16:10.976 "state": "online", 00:16:10.976 "raid_level": "raid5f", 00:16:10.976 "superblock": true, 00:16:10.976 "num_base_bdevs": 3, 00:16:10.976 "num_base_bdevs_discovered": 2, 00:16:10.976 "num_base_bdevs_operational": 2, 00:16:10.976 "base_bdevs_list": [ 00:16:10.976 { 00:16:10.976 "name": null, 00:16:10.976 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.976 "is_configured": false, 00:16:10.976 "data_offset": 0, 00:16:10.976 "data_size": 63488 00:16:10.976 }, 00:16:10.976 { 00:16:10.976 "name": "BaseBdev2", 00:16:10.976 "uuid": "49476007-0b9f-57b4-894d-8859a2e46162", 00:16:10.976 "is_configured": true, 00:16:10.976 "data_offset": 2048, 00:16:10.976 "data_size": 63488 00:16:10.976 }, 00:16:10.976 { 00:16:10.976 "name": "BaseBdev3", 00:16:10.976 "uuid": 
"54e1c561-e5bb-525f-ae49-892be85f005f", 00:16:10.976 "is_configured": true, 00:16:10.976 "data_offset": 2048, 00:16:10.976 "data_size": 63488 00:16:10.976 } 00:16:10.976 ] 00:16:10.976 }' 00:16:10.976 16:12:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:10.976 16:12:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.245 16:12:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:11.245 16:12:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.245 16:12:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.245 [2024-12-12 16:12:37.465911] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:11.245 [2024-12-12 16:12:37.466128] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:11.245 [2024-12-12 16:12:37.466187] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:16:11.245 [2024-12-12 16:12:37.466230] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:11.245 [2024-12-12 16:12:37.466925] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:11.245 [2024-12-12 16:12:37.467015] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:11.245 [2024-12-12 16:12:37.467189] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:11.245 [2024-12-12 16:12:37.467248] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:11.245 [2024-12-12 16:12:37.467303] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:11.245 [2024-12-12 16:12:37.467401] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:11.245 [2024-12-12 16:12:37.484320] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0 00:16:11.245 spare 00:16:11.245 16:12:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.245 16:12:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:11.245 [2024-12-12 16:12:37.492057] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:12.184 16:12:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:12.184 16:12:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:12.184 16:12:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:12.184 16:12:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:12.184 16:12:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:12.184 16:12:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.184 16:12:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:12.184 16:12:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.184 16:12:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.184 16:12:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.444 16:12:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:12.444 "name": "raid_bdev1", 00:16:12.444 "uuid": "92d9ee36-a52f-4a74-a2df-121c6954c230", 00:16:12.444 "strip_size_kb": 64, 00:16:12.444 "state": 
"online", 00:16:12.444 "raid_level": "raid5f", 00:16:12.444 "superblock": true, 00:16:12.444 "num_base_bdevs": 3, 00:16:12.444 "num_base_bdevs_discovered": 3, 00:16:12.444 "num_base_bdevs_operational": 3, 00:16:12.444 "process": { 00:16:12.444 "type": "rebuild", 00:16:12.444 "target": "spare", 00:16:12.444 "progress": { 00:16:12.444 "blocks": 20480, 00:16:12.444 "percent": 16 00:16:12.444 } 00:16:12.444 }, 00:16:12.444 "base_bdevs_list": [ 00:16:12.444 { 00:16:12.444 "name": "spare", 00:16:12.444 "uuid": "d99d8332-9e56-56ec-9ae9-575bce21c1f1", 00:16:12.444 "is_configured": true, 00:16:12.444 "data_offset": 2048, 00:16:12.444 "data_size": 63488 00:16:12.444 }, 00:16:12.444 { 00:16:12.444 "name": "BaseBdev2", 00:16:12.444 "uuid": "49476007-0b9f-57b4-894d-8859a2e46162", 00:16:12.444 "is_configured": true, 00:16:12.444 "data_offset": 2048, 00:16:12.444 "data_size": 63488 00:16:12.444 }, 00:16:12.444 { 00:16:12.444 "name": "BaseBdev3", 00:16:12.444 "uuid": "54e1c561-e5bb-525f-ae49-892be85f005f", 00:16:12.444 "is_configured": true, 00:16:12.444 "data_offset": 2048, 00:16:12.444 "data_size": 63488 00:16:12.444 } 00:16:12.444 ] 00:16:12.444 }' 00:16:12.444 16:12:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:12.445 16:12:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:12.445 16:12:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:12.445 16:12:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:12.445 16:12:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:12.445 16:12:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.445 16:12:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.445 [2024-12-12 16:12:38.643818] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:12.445 [2024-12-12 16:12:38.703321] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:12.445 [2024-12-12 16:12:38.703397] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:12.445 [2024-12-12 16:12:38.703422] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:12.445 [2024-12-12 16:12:38.703433] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:12.445 16:12:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.445 16:12:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:12.445 16:12:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:12.445 16:12:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:12.445 16:12:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:12.445 16:12:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:12.445 16:12:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:12.445 16:12:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:12.445 16:12:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:12.445 16:12:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:12.445 16:12:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:12.445 16:12:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.445 16:12:38 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:12.445 16:12:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.445 16:12:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.445 16:12:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.704 16:12:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:12.704 "name": "raid_bdev1", 00:16:12.704 "uuid": "92d9ee36-a52f-4a74-a2df-121c6954c230", 00:16:12.704 "strip_size_kb": 64, 00:16:12.704 "state": "online", 00:16:12.704 "raid_level": "raid5f", 00:16:12.704 "superblock": true, 00:16:12.704 "num_base_bdevs": 3, 00:16:12.704 "num_base_bdevs_discovered": 2, 00:16:12.704 "num_base_bdevs_operational": 2, 00:16:12.704 "base_bdevs_list": [ 00:16:12.704 { 00:16:12.704 "name": null, 00:16:12.704 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.704 "is_configured": false, 00:16:12.704 "data_offset": 0, 00:16:12.704 "data_size": 63488 00:16:12.704 }, 00:16:12.704 { 00:16:12.704 "name": "BaseBdev2", 00:16:12.704 "uuid": "49476007-0b9f-57b4-894d-8859a2e46162", 00:16:12.704 "is_configured": true, 00:16:12.704 "data_offset": 2048, 00:16:12.704 "data_size": 63488 00:16:12.704 }, 00:16:12.704 { 00:16:12.704 "name": "BaseBdev3", 00:16:12.704 "uuid": "54e1c561-e5bb-525f-ae49-892be85f005f", 00:16:12.704 "is_configured": true, 00:16:12.704 "data_offset": 2048, 00:16:12.704 "data_size": 63488 00:16:12.704 } 00:16:12.704 ] 00:16:12.704 }' 00:16:12.704 16:12:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:12.704 16:12:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.963 16:12:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:12.963 16:12:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:16:12.963 16:12:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:12.963 16:12:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:12.963 16:12:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:12.963 16:12:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.963 16:12:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:12.963 16:12:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.963 16:12:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.963 16:12:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.963 16:12:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:12.963 "name": "raid_bdev1", 00:16:12.963 "uuid": "92d9ee36-a52f-4a74-a2df-121c6954c230", 00:16:12.963 "strip_size_kb": 64, 00:16:12.963 "state": "online", 00:16:12.963 "raid_level": "raid5f", 00:16:12.963 "superblock": true, 00:16:12.963 "num_base_bdevs": 3, 00:16:12.963 "num_base_bdevs_discovered": 2, 00:16:12.963 "num_base_bdevs_operational": 2, 00:16:12.963 "base_bdevs_list": [ 00:16:12.963 { 00:16:12.963 "name": null, 00:16:12.963 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.963 "is_configured": false, 00:16:12.963 "data_offset": 0, 00:16:12.963 "data_size": 63488 00:16:12.963 }, 00:16:12.963 { 00:16:12.963 "name": "BaseBdev2", 00:16:12.963 "uuid": "49476007-0b9f-57b4-894d-8859a2e46162", 00:16:12.963 "is_configured": true, 00:16:12.963 "data_offset": 2048, 00:16:12.963 "data_size": 63488 00:16:12.963 }, 00:16:12.963 { 00:16:12.963 "name": "BaseBdev3", 00:16:12.963 "uuid": "54e1c561-e5bb-525f-ae49-892be85f005f", 00:16:12.963 "is_configured": true, 
00:16:12.963 "data_offset": 2048, 00:16:12.963 "data_size": 63488 00:16:12.963 } 00:16:12.963 ] 00:16:12.963 }' 00:16:12.963 16:12:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:12.963 16:12:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:12.963 16:12:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:13.223 16:12:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:13.223 16:12:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:13.223 16:12:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.223 16:12:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.223 16:12:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.223 16:12:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:13.223 16:12:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.223 16:12:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.223 [2024-12-12 16:12:39.348842] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:13.223 [2024-12-12 16:12:39.349034] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:13.223 [2024-12-12 16:12:39.349080] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:16:13.223 [2024-12-12 16:12:39.349093] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:13.223 [2024-12-12 16:12:39.349693] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:13.223 [2024-12-12 
16:12:39.349718] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:13.223 [2024-12-12 16:12:39.349833] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:13.223 [2024-12-12 16:12:39.349856] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:13.223 [2024-12-12 16:12:39.349869] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:13.223 [2024-12-12 16:12:39.349913] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:13.223 BaseBdev1 00:16:13.223 16:12:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.223 16:12:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:14.161 16:12:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:14.161 16:12:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:14.161 16:12:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:14.161 16:12:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:14.161 16:12:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:14.161 16:12:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:14.161 16:12:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:14.161 16:12:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:14.161 16:12:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:14.161 16:12:40 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:14.161 16:12:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.161 16:12:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:14.161 16:12:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.161 16:12:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.161 16:12:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.161 16:12:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:14.161 "name": "raid_bdev1", 00:16:14.161 "uuid": "92d9ee36-a52f-4a74-a2df-121c6954c230", 00:16:14.161 "strip_size_kb": 64, 00:16:14.161 "state": "online", 00:16:14.161 "raid_level": "raid5f", 00:16:14.161 "superblock": true, 00:16:14.161 "num_base_bdevs": 3, 00:16:14.161 "num_base_bdevs_discovered": 2, 00:16:14.161 "num_base_bdevs_operational": 2, 00:16:14.161 "base_bdevs_list": [ 00:16:14.161 { 00:16:14.161 "name": null, 00:16:14.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:14.161 "is_configured": false, 00:16:14.161 "data_offset": 0, 00:16:14.161 "data_size": 63488 00:16:14.161 }, 00:16:14.161 { 00:16:14.161 "name": "BaseBdev2", 00:16:14.161 "uuid": "49476007-0b9f-57b4-894d-8859a2e46162", 00:16:14.161 "is_configured": true, 00:16:14.161 "data_offset": 2048, 00:16:14.161 "data_size": 63488 00:16:14.161 }, 00:16:14.161 { 00:16:14.161 "name": "BaseBdev3", 00:16:14.161 "uuid": "54e1c561-e5bb-525f-ae49-892be85f005f", 00:16:14.161 "is_configured": true, 00:16:14.161 "data_offset": 2048, 00:16:14.161 "data_size": 63488 00:16:14.161 } 00:16:14.161 ] 00:16:14.161 }' 00:16:14.161 16:12:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:14.162 16:12:40 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:14.731 16:12:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:14.731 16:12:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:14.731 16:12:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:14.731 16:12:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:14.731 16:12:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:14.731 16:12:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:14.731 16:12:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.731 16:12:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.731 16:12:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.731 16:12:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.731 16:12:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:14.731 "name": "raid_bdev1", 00:16:14.731 "uuid": "92d9ee36-a52f-4a74-a2df-121c6954c230", 00:16:14.731 "strip_size_kb": 64, 00:16:14.731 "state": "online", 00:16:14.731 "raid_level": "raid5f", 00:16:14.731 "superblock": true, 00:16:14.731 "num_base_bdevs": 3, 00:16:14.731 "num_base_bdevs_discovered": 2, 00:16:14.731 "num_base_bdevs_operational": 2, 00:16:14.731 "base_bdevs_list": [ 00:16:14.731 { 00:16:14.731 "name": null, 00:16:14.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:14.731 "is_configured": false, 00:16:14.731 "data_offset": 0, 00:16:14.731 "data_size": 63488 00:16:14.731 }, 00:16:14.731 { 00:16:14.731 "name": "BaseBdev2", 00:16:14.731 "uuid": "49476007-0b9f-57b4-894d-8859a2e46162", 
00:16:14.731 "is_configured": true, 00:16:14.731 "data_offset": 2048, 00:16:14.731 "data_size": 63488 00:16:14.731 }, 00:16:14.731 { 00:16:14.731 "name": "BaseBdev3", 00:16:14.731 "uuid": "54e1c561-e5bb-525f-ae49-892be85f005f", 00:16:14.731 "is_configured": true, 00:16:14.731 "data_offset": 2048, 00:16:14.731 "data_size": 63488 00:16:14.731 } 00:16:14.731 ] 00:16:14.731 }' 00:16:14.731 16:12:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:14.731 16:12:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:14.731 16:12:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:14.731 16:12:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:14.731 16:12:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:14.731 16:12:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:16:14.731 16:12:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:14.731 16:12:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:14.731 16:12:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:14.731 16:12:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:14.731 16:12:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:14.731 16:12:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:14.731 16:12:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.731 16:12:40 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.731 [2024-12-12 16:12:40.938763] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:14.731 [2024-12-12 16:12:40.939017] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:14.731 [2024-12-12 16:12:40.939037] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:14.731 request: 00:16:14.731 { 00:16:14.731 "base_bdev": "BaseBdev1", 00:16:14.731 "raid_bdev": "raid_bdev1", 00:16:14.731 "method": "bdev_raid_add_base_bdev", 00:16:14.731 "req_id": 1 00:16:14.731 } 00:16:14.731 Got JSON-RPC error response 00:16:14.731 response: 00:16:14.731 { 00:16:14.731 "code": -22, 00:16:14.731 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:16:14.731 } 00:16:14.731 16:12:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:14.731 16:12:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:16:14.731 16:12:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:14.731 16:12:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:14.731 16:12:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:14.731 16:12:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:15.671 16:12:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:15.671 16:12:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:15.671 16:12:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:15.671 16:12:41 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:15.671 16:12:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:15.671 16:12:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:15.671 16:12:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:15.671 16:12:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:15.671 16:12:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:15.671 16:12:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:15.671 16:12:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.671 16:12:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:15.671 16:12:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.671 16:12:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.671 16:12:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.671 16:12:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:15.671 "name": "raid_bdev1", 00:16:15.671 "uuid": "92d9ee36-a52f-4a74-a2df-121c6954c230", 00:16:15.671 "strip_size_kb": 64, 00:16:15.671 "state": "online", 00:16:15.671 "raid_level": "raid5f", 00:16:15.671 "superblock": true, 00:16:15.671 "num_base_bdevs": 3, 00:16:15.671 "num_base_bdevs_discovered": 2, 00:16:15.671 "num_base_bdevs_operational": 2, 00:16:15.671 "base_bdevs_list": [ 00:16:15.671 { 00:16:15.671 "name": null, 00:16:15.671 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:15.671 "is_configured": false, 00:16:15.671 "data_offset": 0, 00:16:15.671 "data_size": 63488 00:16:15.671 }, 00:16:15.671 { 00:16:15.671 
"name": "BaseBdev2", 00:16:15.671 "uuid": "49476007-0b9f-57b4-894d-8859a2e46162", 00:16:15.671 "is_configured": true, 00:16:15.671 "data_offset": 2048, 00:16:15.671 "data_size": 63488 00:16:15.671 }, 00:16:15.671 { 00:16:15.671 "name": "BaseBdev3", 00:16:15.671 "uuid": "54e1c561-e5bb-525f-ae49-892be85f005f", 00:16:15.671 "is_configured": true, 00:16:15.671 "data_offset": 2048, 00:16:15.671 "data_size": 63488 00:16:15.671 } 00:16:15.671 ] 00:16:15.671 }' 00:16:15.671 16:12:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:15.671 16:12:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.240 16:12:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:16.240 16:12:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:16.240 16:12:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:16.240 16:12:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:16.240 16:12:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:16.240 16:12:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.240 16:12:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:16.240 16:12:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.240 16:12:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.240 16:12:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.240 16:12:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:16.240 "name": "raid_bdev1", 00:16:16.240 "uuid": "92d9ee36-a52f-4a74-a2df-121c6954c230", 00:16:16.240 
"strip_size_kb": 64, 00:16:16.240 "state": "online", 00:16:16.240 "raid_level": "raid5f", 00:16:16.240 "superblock": true, 00:16:16.240 "num_base_bdevs": 3, 00:16:16.240 "num_base_bdevs_discovered": 2, 00:16:16.240 "num_base_bdevs_operational": 2, 00:16:16.240 "base_bdevs_list": [ 00:16:16.240 { 00:16:16.240 "name": null, 00:16:16.240 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:16.240 "is_configured": false, 00:16:16.240 "data_offset": 0, 00:16:16.240 "data_size": 63488 00:16:16.240 }, 00:16:16.240 { 00:16:16.240 "name": "BaseBdev2", 00:16:16.240 "uuid": "49476007-0b9f-57b4-894d-8859a2e46162", 00:16:16.240 "is_configured": true, 00:16:16.240 "data_offset": 2048, 00:16:16.240 "data_size": 63488 00:16:16.240 }, 00:16:16.240 { 00:16:16.240 "name": "BaseBdev3", 00:16:16.240 "uuid": "54e1c561-e5bb-525f-ae49-892be85f005f", 00:16:16.240 "is_configured": true, 00:16:16.240 "data_offset": 2048, 00:16:16.240 "data_size": 63488 00:16:16.240 } 00:16:16.240 ] 00:16:16.240 }' 00:16:16.240 16:12:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:16.240 16:12:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:16.240 16:12:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:16.240 16:12:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:16.240 16:12:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 84103 00:16:16.240 16:12:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 84103 ']' 00:16:16.240 16:12:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 84103 00:16:16.240 16:12:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:16:16.240 16:12:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:16.240 16:12:42 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84103 00:16:16.240 16:12:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:16.240 16:12:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:16.240 killing process with pid 84103 00:16:16.240 16:12:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84103' 00:16:16.240 16:12:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 84103 00:16:16.240 Received shutdown signal, test time was about 60.000000 seconds 00:16:16.240 00:16:16.240 Latency(us) 00:16:16.240 [2024-12-12T16:12:42.592Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:16.240 [2024-12-12T16:12:42.592Z] =================================================================================================================== 00:16:16.240 [2024-12-12T16:12:42.592Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:16.240 [2024-12-12 16:12:42.541806] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:16.240 16:12:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 84103 00:16:16.240 [2024-12-12 16:12:42.542000] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:16.240 [2024-12-12 16:12:42.542083] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:16.240 [2024-12-12 16:12:42.542099] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:16:16.809 [2024-12-12 16:12:42.975176] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:18.189 16:12:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:16:18.189 00:16:18.189 real 0m23.316s 00:16:18.189 user 0m29.520s 
00:16:18.189 sys 0m2.900s 00:16:18.189 16:12:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:18.189 ************************************ 00:16:18.189 END TEST raid5f_rebuild_test_sb 00:16:18.189 ************************************ 00:16:18.189 16:12:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.189 16:12:44 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:16:18.189 16:12:44 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:16:18.189 16:12:44 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:18.189 16:12:44 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:18.189 16:12:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:18.189 ************************************ 00:16:18.189 START TEST raid5f_state_function_test 00:16:18.189 ************************************ 00:16:18.189 16:12:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 false 00:16:18.189 16:12:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:16:18.189 16:12:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:16:18.189 16:12:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:16:18.189 16:12:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:18.189 16:12:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:18.189 16:12:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:18.189 16:12:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:18.189 16:12:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:16:18.189 16:12:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:18.189 16:12:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:18.189 16:12:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:18.189 16:12:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:18.189 16:12:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:16:18.189 16:12:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:18.189 16:12:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:18.189 16:12:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:16:18.189 16:12:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:18.189 16:12:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:18.189 16:12:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:18.189 16:12:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:18.189 16:12:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:18.189 16:12:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:18.189 16:12:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:18.189 16:12:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:18.189 16:12:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:16:18.189 16:12:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:16:18.189 16:12:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:16:18.189 16:12:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:16:18.189 16:12:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:16:18.189 16:12:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=84859 00:16:18.189 Process raid pid: 84859 00:16:18.189 16:12:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:18.189 16:12:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 84859' 00:16:18.189 16:12:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 84859 00:16:18.189 16:12:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 84859 ']' 00:16:18.189 16:12:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:18.189 16:12:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:18.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:18.189 16:12:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:18.189 16:12:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:18.189 16:12:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.189 [2024-12-12 16:12:44.360068] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:16:18.189 [2024-12-12 16:12:44.360858] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:18.448 [2024-12-12 16:12:44.541063] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:18.448 [2024-12-12 16:12:44.676167] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:18.708 [2024-12-12 16:12:44.920147] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:18.708 [2024-12-12 16:12:44.920290] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:18.967 16:12:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:18.967 16:12:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:16:18.967 16:12:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:18.967 16:12:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.967 16:12:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.967 [2024-12-12 16:12:45.208714] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:18.967 [2024-12-12 16:12:45.208909] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:18.967 [2024-12-12 16:12:45.208962] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:18.967 [2024-12-12 16:12:45.208994] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:18.967 [2024-12-12 16:12:45.209018] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:16:18.967 [2024-12-12 16:12:45.209047] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:18.967 [2024-12-12 16:12:45.209080] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:18.967 [2024-12-12 16:12:45.209111] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:18.967 16:12:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.967 16:12:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:18.967 16:12:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:18.967 16:12:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:18.967 16:12:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:18.967 16:12:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:18.967 16:12:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:18.967 16:12:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:18.967 16:12:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:18.967 16:12:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:18.967 16:12:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:18.967 16:12:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.967 16:12:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:18.967 16:12:45 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.967 16:12:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.967 16:12:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.967 16:12:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:18.967 "name": "Existed_Raid", 00:16:18.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.967 "strip_size_kb": 64, 00:16:18.967 "state": "configuring", 00:16:18.967 "raid_level": "raid5f", 00:16:18.967 "superblock": false, 00:16:18.967 "num_base_bdevs": 4, 00:16:18.967 "num_base_bdevs_discovered": 0, 00:16:18.967 "num_base_bdevs_operational": 4, 00:16:18.967 "base_bdevs_list": [ 00:16:18.967 { 00:16:18.967 "name": "BaseBdev1", 00:16:18.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.967 "is_configured": false, 00:16:18.967 "data_offset": 0, 00:16:18.967 "data_size": 0 00:16:18.967 }, 00:16:18.967 { 00:16:18.967 "name": "BaseBdev2", 00:16:18.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.967 "is_configured": false, 00:16:18.967 "data_offset": 0, 00:16:18.967 "data_size": 0 00:16:18.967 }, 00:16:18.967 { 00:16:18.967 "name": "BaseBdev3", 00:16:18.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.967 "is_configured": false, 00:16:18.967 "data_offset": 0, 00:16:18.967 "data_size": 0 00:16:18.967 }, 00:16:18.967 { 00:16:18.967 "name": "BaseBdev4", 00:16:18.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.967 "is_configured": false, 00:16:18.967 "data_offset": 0, 00:16:18.967 "data_size": 0 00:16:18.967 } 00:16:18.967 ] 00:16:18.967 }' 00:16:18.967 16:12:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:18.967 16:12:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.535 16:12:45 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:19.535 16:12:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.535 16:12:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.535 [2024-12-12 16:12:45.660101] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:19.535 [2024-12-12 16:12:45.660174] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:19.535 16:12:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.535 16:12:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:19.535 16:12:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.535 16:12:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.535 [2024-12-12 16:12:45.671986] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:19.535 [2024-12-12 16:12:45.672044] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:19.535 [2024-12-12 16:12:45.672055] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:19.535 [2024-12-12 16:12:45.672067] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:19.535 [2024-12-12 16:12:45.672075] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:19.535 [2024-12-12 16:12:45.672087] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:19.535 [2024-12-12 16:12:45.672095] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:16:19.535 [2024-12-12 16:12:45.672106] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:19.535 16:12:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.535 16:12:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:19.535 16:12:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.535 16:12:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.535 [2024-12-12 16:12:45.727237] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:19.535 BaseBdev1 00:16:19.535 16:12:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.535 16:12:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:19.535 16:12:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:19.535 16:12:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:19.535 16:12:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:19.535 16:12:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:19.535 16:12:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:19.535 16:12:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:19.535 16:12:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.535 16:12:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.535 16:12:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.535 
16:12:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:19.535 16:12:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.535 16:12:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.535 [ 00:16:19.535 { 00:16:19.535 "name": "BaseBdev1", 00:16:19.536 "aliases": [ 00:16:19.536 "0740d8a9-99ae-4a16-aa4b-da388c9aa96c" 00:16:19.536 ], 00:16:19.536 "product_name": "Malloc disk", 00:16:19.536 "block_size": 512, 00:16:19.536 "num_blocks": 65536, 00:16:19.536 "uuid": "0740d8a9-99ae-4a16-aa4b-da388c9aa96c", 00:16:19.536 "assigned_rate_limits": { 00:16:19.536 "rw_ios_per_sec": 0, 00:16:19.536 "rw_mbytes_per_sec": 0, 00:16:19.536 "r_mbytes_per_sec": 0, 00:16:19.536 "w_mbytes_per_sec": 0 00:16:19.536 }, 00:16:19.536 "claimed": true, 00:16:19.536 "claim_type": "exclusive_write", 00:16:19.536 "zoned": false, 00:16:19.536 "supported_io_types": { 00:16:19.536 "read": true, 00:16:19.536 "write": true, 00:16:19.536 "unmap": true, 00:16:19.536 "flush": true, 00:16:19.536 "reset": true, 00:16:19.536 "nvme_admin": false, 00:16:19.536 "nvme_io": false, 00:16:19.536 "nvme_io_md": false, 00:16:19.536 "write_zeroes": true, 00:16:19.536 "zcopy": true, 00:16:19.536 "get_zone_info": false, 00:16:19.536 "zone_management": false, 00:16:19.536 "zone_append": false, 00:16:19.536 "compare": false, 00:16:19.536 "compare_and_write": false, 00:16:19.536 "abort": true, 00:16:19.536 "seek_hole": false, 00:16:19.536 "seek_data": false, 00:16:19.536 "copy": true, 00:16:19.536 "nvme_iov_md": false 00:16:19.536 }, 00:16:19.536 "memory_domains": [ 00:16:19.536 { 00:16:19.536 "dma_device_id": "system", 00:16:19.536 "dma_device_type": 1 00:16:19.536 }, 00:16:19.536 { 00:16:19.536 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:19.536 "dma_device_type": 2 00:16:19.536 } 00:16:19.536 ], 00:16:19.536 "driver_specific": {} 00:16:19.536 } 
00:16:19.536 ] 00:16:19.536 16:12:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.536 16:12:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:19.536 16:12:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:19.536 16:12:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:19.536 16:12:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:19.536 16:12:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:19.536 16:12:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:19.536 16:12:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:19.536 16:12:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:19.536 16:12:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:19.536 16:12:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:19.536 16:12:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:19.536 16:12:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.536 16:12:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.536 16:12:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.536 16:12:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:19.536 16:12:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:16:19.536 16:12:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:19.536 "name": "Existed_Raid", 00:16:19.536 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:19.536 "strip_size_kb": 64, 00:16:19.536 "state": "configuring", 00:16:19.536 "raid_level": "raid5f", 00:16:19.536 "superblock": false, 00:16:19.536 "num_base_bdevs": 4, 00:16:19.536 "num_base_bdevs_discovered": 1, 00:16:19.536 "num_base_bdevs_operational": 4, 00:16:19.536 "base_bdevs_list": [ 00:16:19.536 { 00:16:19.536 "name": "BaseBdev1", 00:16:19.536 "uuid": "0740d8a9-99ae-4a16-aa4b-da388c9aa96c", 00:16:19.536 "is_configured": true, 00:16:19.536 "data_offset": 0, 00:16:19.536 "data_size": 65536 00:16:19.536 }, 00:16:19.536 { 00:16:19.536 "name": "BaseBdev2", 00:16:19.536 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:19.536 "is_configured": false, 00:16:19.536 "data_offset": 0, 00:16:19.536 "data_size": 0 00:16:19.536 }, 00:16:19.536 { 00:16:19.536 "name": "BaseBdev3", 00:16:19.536 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:19.536 "is_configured": false, 00:16:19.536 "data_offset": 0, 00:16:19.536 "data_size": 0 00:16:19.536 }, 00:16:19.536 { 00:16:19.536 "name": "BaseBdev4", 00:16:19.536 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:19.536 "is_configured": false, 00:16:19.536 "data_offset": 0, 00:16:19.536 "data_size": 0 00:16:19.536 } 00:16:19.536 ] 00:16:19.536 }' 00:16:19.536 16:12:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:19.536 16:12:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.105 16:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:20.105 16:12:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.105 16:12:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.105 
[2024-12-12 16:12:46.174462] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:20.105 [2024-12-12 16:12:46.174588] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:20.105 16:12:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.105 16:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:20.105 16:12:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.105 16:12:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.105 [2024-12-12 16:12:46.182523] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:20.105 [2024-12-12 16:12:46.184658] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:20.105 [2024-12-12 16:12:46.184754] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:20.105 [2024-12-12 16:12:46.184789] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:20.105 [2024-12-12 16:12:46.184820] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:20.105 [2024-12-12 16:12:46.184843] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:20.105 [2024-12-12 16:12:46.184869] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:20.105 16:12:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.105 16:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:20.105 16:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 00:16:20.105 16:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:20.105 16:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:20.105 16:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:20.105 16:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:20.105 16:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:20.105 16:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:20.105 16:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:20.105 16:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:20.105 16:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:20.105 16:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:20.105 16:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.105 16:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:20.105 16:12:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.105 16:12:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.105 16:12:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.106 16:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:20.106 "name": "Existed_Raid", 00:16:20.106 "uuid": "00000000-0000-0000-0000-000000000000", 
00:16:20.106 "strip_size_kb": 64, 00:16:20.106 "state": "configuring", 00:16:20.106 "raid_level": "raid5f", 00:16:20.106 "superblock": false, 00:16:20.106 "num_base_bdevs": 4, 00:16:20.106 "num_base_bdevs_discovered": 1, 00:16:20.106 "num_base_bdevs_operational": 4, 00:16:20.106 "base_bdevs_list": [ 00:16:20.106 { 00:16:20.106 "name": "BaseBdev1", 00:16:20.106 "uuid": "0740d8a9-99ae-4a16-aa4b-da388c9aa96c", 00:16:20.106 "is_configured": true, 00:16:20.106 "data_offset": 0, 00:16:20.106 "data_size": 65536 00:16:20.106 }, 00:16:20.106 { 00:16:20.106 "name": "BaseBdev2", 00:16:20.106 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.106 "is_configured": false, 00:16:20.106 "data_offset": 0, 00:16:20.106 "data_size": 0 00:16:20.106 }, 00:16:20.106 { 00:16:20.106 "name": "BaseBdev3", 00:16:20.106 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.106 "is_configured": false, 00:16:20.106 "data_offset": 0, 00:16:20.106 "data_size": 0 00:16:20.106 }, 00:16:20.106 { 00:16:20.106 "name": "BaseBdev4", 00:16:20.106 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.106 "is_configured": false, 00:16:20.106 "data_offset": 0, 00:16:20.106 "data_size": 0 00:16:20.106 } 00:16:20.106 ] 00:16:20.106 }' 00:16:20.106 16:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:20.106 16:12:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.365 16:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:20.365 16:12:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.366 16:12:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.366 [2024-12-12 16:12:46.691802] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:20.366 BaseBdev2 00:16:20.366 16:12:46 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.366 16:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:20.366 16:12:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:20.366 16:12:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:20.366 16:12:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:20.366 16:12:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:20.366 16:12:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:20.366 16:12:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:20.366 16:12:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.366 16:12:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.366 16:12:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.366 16:12:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:20.366 16:12:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.366 16:12:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.625 [ 00:16:20.625 { 00:16:20.625 "name": "BaseBdev2", 00:16:20.625 "aliases": [ 00:16:20.625 "51bc45ee-20e5-4e7c-8ab4-f6fb852578e7" 00:16:20.625 ], 00:16:20.625 "product_name": "Malloc disk", 00:16:20.625 "block_size": 512, 00:16:20.625 "num_blocks": 65536, 00:16:20.625 "uuid": "51bc45ee-20e5-4e7c-8ab4-f6fb852578e7", 00:16:20.625 "assigned_rate_limits": { 00:16:20.625 "rw_ios_per_sec": 0, 00:16:20.625 "rw_mbytes_per_sec": 0, 00:16:20.625 
"r_mbytes_per_sec": 0, 00:16:20.625 "w_mbytes_per_sec": 0 00:16:20.625 }, 00:16:20.625 "claimed": true, 00:16:20.625 "claim_type": "exclusive_write", 00:16:20.625 "zoned": false, 00:16:20.625 "supported_io_types": { 00:16:20.625 "read": true, 00:16:20.625 "write": true, 00:16:20.625 "unmap": true, 00:16:20.625 "flush": true, 00:16:20.625 "reset": true, 00:16:20.625 "nvme_admin": false, 00:16:20.625 "nvme_io": false, 00:16:20.625 "nvme_io_md": false, 00:16:20.625 "write_zeroes": true, 00:16:20.625 "zcopy": true, 00:16:20.625 "get_zone_info": false, 00:16:20.625 "zone_management": false, 00:16:20.625 "zone_append": false, 00:16:20.625 "compare": false, 00:16:20.625 "compare_and_write": false, 00:16:20.625 "abort": true, 00:16:20.625 "seek_hole": false, 00:16:20.625 "seek_data": false, 00:16:20.625 "copy": true, 00:16:20.625 "nvme_iov_md": false 00:16:20.625 }, 00:16:20.625 "memory_domains": [ 00:16:20.625 { 00:16:20.625 "dma_device_id": "system", 00:16:20.625 "dma_device_type": 1 00:16:20.625 }, 00:16:20.625 { 00:16:20.625 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:20.625 "dma_device_type": 2 00:16:20.625 } 00:16:20.625 ], 00:16:20.625 "driver_specific": {} 00:16:20.625 } 00:16:20.625 ] 00:16:20.625 16:12:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.625 16:12:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:20.625 16:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:20.625 16:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:20.625 16:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:20.625 16:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:20.625 16:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:16:20.625 16:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:20.625 16:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:20.625 16:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:20.625 16:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:20.625 16:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:20.625 16:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:20.625 16:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:20.625 16:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.625 16:12:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.626 16:12:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.626 16:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:20.626 16:12:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.626 16:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:20.626 "name": "Existed_Raid", 00:16:20.626 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.626 "strip_size_kb": 64, 00:16:20.626 "state": "configuring", 00:16:20.626 "raid_level": "raid5f", 00:16:20.626 "superblock": false, 00:16:20.626 "num_base_bdevs": 4, 00:16:20.626 "num_base_bdevs_discovered": 2, 00:16:20.626 "num_base_bdevs_operational": 4, 00:16:20.626 "base_bdevs_list": [ 00:16:20.626 { 00:16:20.626 "name": "BaseBdev1", 00:16:20.626 "uuid": 
"0740d8a9-99ae-4a16-aa4b-da388c9aa96c", 00:16:20.626 "is_configured": true, 00:16:20.626 "data_offset": 0, 00:16:20.626 "data_size": 65536 00:16:20.626 }, 00:16:20.626 { 00:16:20.626 "name": "BaseBdev2", 00:16:20.626 "uuid": "51bc45ee-20e5-4e7c-8ab4-f6fb852578e7", 00:16:20.626 "is_configured": true, 00:16:20.626 "data_offset": 0, 00:16:20.626 "data_size": 65536 00:16:20.626 }, 00:16:20.626 { 00:16:20.626 "name": "BaseBdev3", 00:16:20.626 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.626 "is_configured": false, 00:16:20.626 "data_offset": 0, 00:16:20.626 "data_size": 0 00:16:20.626 }, 00:16:20.626 { 00:16:20.626 "name": "BaseBdev4", 00:16:20.626 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.626 "is_configured": false, 00:16:20.626 "data_offset": 0, 00:16:20.626 "data_size": 0 00:16:20.626 } 00:16:20.626 ] 00:16:20.626 }' 00:16:20.626 16:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:20.626 16:12:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.885 16:12:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:20.885 16:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.885 16:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.145 [2024-12-12 16:12:47.263998] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:21.145 BaseBdev3 00:16:21.145 16:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.145 16:12:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:16:21.145 16:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:21.145 16:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- 
# local bdev_timeout= 00:16:21.145 16:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:21.145 16:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:21.145 16:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:21.145 16:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:21.145 16:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.145 16:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.146 16:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.146 16:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:21.146 16:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.146 16:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.146 [ 00:16:21.146 { 00:16:21.146 "name": "BaseBdev3", 00:16:21.146 "aliases": [ 00:16:21.146 "48e435b1-f22f-4160-ab81-ecde6f16941e" 00:16:21.146 ], 00:16:21.146 "product_name": "Malloc disk", 00:16:21.146 "block_size": 512, 00:16:21.146 "num_blocks": 65536, 00:16:21.146 "uuid": "48e435b1-f22f-4160-ab81-ecde6f16941e", 00:16:21.146 "assigned_rate_limits": { 00:16:21.146 "rw_ios_per_sec": 0, 00:16:21.146 "rw_mbytes_per_sec": 0, 00:16:21.146 "r_mbytes_per_sec": 0, 00:16:21.146 "w_mbytes_per_sec": 0 00:16:21.146 }, 00:16:21.146 "claimed": true, 00:16:21.146 "claim_type": "exclusive_write", 00:16:21.146 "zoned": false, 00:16:21.146 "supported_io_types": { 00:16:21.146 "read": true, 00:16:21.146 "write": true, 00:16:21.146 "unmap": true, 00:16:21.146 "flush": true, 00:16:21.146 "reset": true, 00:16:21.146 "nvme_admin": false, 
00:16:21.146 "nvme_io": false, 00:16:21.146 "nvme_io_md": false, 00:16:21.146 "write_zeroes": true, 00:16:21.146 "zcopy": true, 00:16:21.146 "get_zone_info": false, 00:16:21.146 "zone_management": false, 00:16:21.146 "zone_append": false, 00:16:21.146 "compare": false, 00:16:21.146 "compare_and_write": false, 00:16:21.146 "abort": true, 00:16:21.146 "seek_hole": false, 00:16:21.146 "seek_data": false, 00:16:21.146 "copy": true, 00:16:21.146 "nvme_iov_md": false 00:16:21.146 }, 00:16:21.146 "memory_domains": [ 00:16:21.146 { 00:16:21.146 "dma_device_id": "system", 00:16:21.146 "dma_device_type": 1 00:16:21.146 }, 00:16:21.146 { 00:16:21.146 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:21.146 "dma_device_type": 2 00:16:21.146 } 00:16:21.146 ], 00:16:21.146 "driver_specific": {} 00:16:21.146 } 00:16:21.146 ] 00:16:21.146 16:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.146 16:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:21.146 16:12:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:21.146 16:12:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:21.146 16:12:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:21.146 16:12:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:21.146 16:12:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:21.146 16:12:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:21.146 16:12:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:21.146 16:12:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:16:21.146 16:12:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:21.146 16:12:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:21.146 16:12:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:21.146 16:12:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:21.146 16:12:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.146 16:12:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:21.146 16:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.146 16:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.146 16:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.146 16:12:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:21.146 "name": "Existed_Raid", 00:16:21.146 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:21.146 "strip_size_kb": 64, 00:16:21.146 "state": "configuring", 00:16:21.146 "raid_level": "raid5f", 00:16:21.146 "superblock": false, 00:16:21.146 "num_base_bdevs": 4, 00:16:21.146 "num_base_bdevs_discovered": 3, 00:16:21.146 "num_base_bdevs_operational": 4, 00:16:21.146 "base_bdevs_list": [ 00:16:21.146 { 00:16:21.146 "name": "BaseBdev1", 00:16:21.146 "uuid": "0740d8a9-99ae-4a16-aa4b-da388c9aa96c", 00:16:21.146 "is_configured": true, 00:16:21.146 "data_offset": 0, 00:16:21.146 "data_size": 65536 00:16:21.146 }, 00:16:21.146 { 00:16:21.146 "name": "BaseBdev2", 00:16:21.146 "uuid": "51bc45ee-20e5-4e7c-8ab4-f6fb852578e7", 00:16:21.146 "is_configured": true, 00:16:21.146 "data_offset": 0, 00:16:21.146 "data_size": 65536 00:16:21.146 }, 00:16:21.146 { 
00:16:21.146 "name": "BaseBdev3", 00:16:21.146 "uuid": "48e435b1-f22f-4160-ab81-ecde6f16941e", 00:16:21.146 "is_configured": true, 00:16:21.146 "data_offset": 0, 00:16:21.146 "data_size": 65536 00:16:21.146 }, 00:16:21.146 { 00:16:21.146 "name": "BaseBdev4", 00:16:21.146 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:21.146 "is_configured": false, 00:16:21.146 "data_offset": 0, 00:16:21.146 "data_size": 0 00:16:21.146 } 00:16:21.146 ] 00:16:21.146 }' 00:16:21.146 16:12:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:21.146 16:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.405 16:12:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:21.405 16:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.405 16:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.664 [2024-12-12 16:12:47.787008] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:21.664 [2024-12-12 16:12:47.787099] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:21.664 [2024-12-12 16:12:47.787110] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:21.664 [2024-12-12 16:12:47.787410] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:21.664 [2024-12-12 16:12:47.794081] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:21.664 [2024-12-12 16:12:47.794111] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:16:21.664 [2024-12-12 16:12:47.794421] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:21.664 BaseBdev4 00:16:21.664 16:12:47 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.664 16:12:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:16:21.664 16:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:16:21.664 16:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:21.664 16:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:21.664 16:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:21.664 16:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:21.664 16:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:21.664 16:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.664 16:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.664 16:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.665 16:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:21.665 16:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.665 16:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.665 [ 00:16:21.665 { 00:16:21.665 "name": "BaseBdev4", 00:16:21.665 "aliases": [ 00:16:21.665 "9d03eab6-7fc3-462f-973d-a37da3575a0a" 00:16:21.665 ], 00:16:21.665 "product_name": "Malloc disk", 00:16:21.665 "block_size": 512, 00:16:21.665 "num_blocks": 65536, 00:16:21.665 "uuid": "9d03eab6-7fc3-462f-973d-a37da3575a0a", 00:16:21.665 "assigned_rate_limits": { 00:16:21.665 "rw_ios_per_sec": 0, 00:16:21.665 
"rw_mbytes_per_sec": 0, 00:16:21.665 "r_mbytes_per_sec": 0, 00:16:21.665 "w_mbytes_per_sec": 0 00:16:21.665 }, 00:16:21.665 "claimed": true, 00:16:21.665 "claim_type": "exclusive_write", 00:16:21.665 "zoned": false, 00:16:21.665 "supported_io_types": { 00:16:21.665 "read": true, 00:16:21.665 "write": true, 00:16:21.665 "unmap": true, 00:16:21.665 "flush": true, 00:16:21.665 "reset": true, 00:16:21.665 "nvme_admin": false, 00:16:21.665 "nvme_io": false, 00:16:21.665 "nvme_io_md": false, 00:16:21.665 "write_zeroes": true, 00:16:21.665 "zcopy": true, 00:16:21.665 "get_zone_info": false, 00:16:21.665 "zone_management": false, 00:16:21.665 "zone_append": false, 00:16:21.665 "compare": false, 00:16:21.665 "compare_and_write": false, 00:16:21.665 "abort": true, 00:16:21.665 "seek_hole": false, 00:16:21.665 "seek_data": false, 00:16:21.665 "copy": true, 00:16:21.665 "nvme_iov_md": false 00:16:21.665 }, 00:16:21.665 "memory_domains": [ 00:16:21.665 { 00:16:21.665 "dma_device_id": "system", 00:16:21.665 "dma_device_type": 1 00:16:21.665 }, 00:16:21.665 { 00:16:21.665 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:21.665 "dma_device_type": 2 00:16:21.665 } 00:16:21.665 ], 00:16:21.665 "driver_specific": {} 00:16:21.665 } 00:16:21.665 ] 00:16:21.665 16:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.665 16:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:21.665 16:12:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:21.665 16:12:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:21.665 16:12:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:21.665 16:12:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:21.665 16:12:47 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:21.665 16:12:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:21.665 16:12:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:21.665 16:12:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:21.665 16:12:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:21.665 16:12:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:21.665 16:12:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:21.665 16:12:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:21.665 16:12:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.665 16:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.665 16:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.665 16:12:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:21.665 16:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.665 16:12:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:21.665 "name": "Existed_Raid", 00:16:21.665 "uuid": "8c76ef45-1a99-4b4c-a67e-0aaba99bbd6d", 00:16:21.665 "strip_size_kb": 64, 00:16:21.665 "state": "online", 00:16:21.665 "raid_level": "raid5f", 00:16:21.665 "superblock": false, 00:16:21.665 "num_base_bdevs": 4, 00:16:21.665 "num_base_bdevs_discovered": 4, 00:16:21.665 "num_base_bdevs_operational": 4, 00:16:21.665 "base_bdevs_list": [ 00:16:21.665 { 00:16:21.665 "name": 
"BaseBdev1", 00:16:21.665 "uuid": "0740d8a9-99ae-4a16-aa4b-da388c9aa96c", 00:16:21.665 "is_configured": true, 00:16:21.665 "data_offset": 0, 00:16:21.665 "data_size": 65536 00:16:21.665 }, 00:16:21.665 { 00:16:21.665 "name": "BaseBdev2", 00:16:21.665 "uuid": "51bc45ee-20e5-4e7c-8ab4-f6fb852578e7", 00:16:21.665 "is_configured": true, 00:16:21.665 "data_offset": 0, 00:16:21.665 "data_size": 65536 00:16:21.665 }, 00:16:21.665 { 00:16:21.665 "name": "BaseBdev3", 00:16:21.665 "uuid": "48e435b1-f22f-4160-ab81-ecde6f16941e", 00:16:21.665 "is_configured": true, 00:16:21.665 "data_offset": 0, 00:16:21.665 "data_size": 65536 00:16:21.665 }, 00:16:21.665 { 00:16:21.665 "name": "BaseBdev4", 00:16:21.665 "uuid": "9d03eab6-7fc3-462f-973d-a37da3575a0a", 00:16:21.665 "is_configured": true, 00:16:21.665 "data_offset": 0, 00:16:21.665 "data_size": 65536 00:16:21.665 } 00:16:21.665 ] 00:16:21.665 }' 00:16:21.665 16:12:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:21.665 16:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.925 16:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:21.925 16:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:21.925 16:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:21.925 16:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:21.925 16:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:21.925 16:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:21.925 16:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:21.925 16:12:48 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:21.925 16:12:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.925 16:12:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.925 [2024-12-12 16:12:48.267206] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:22.185 16:12:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.185 16:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:22.185 "name": "Existed_Raid", 00:16:22.185 "aliases": [ 00:16:22.185 "8c76ef45-1a99-4b4c-a67e-0aaba99bbd6d" 00:16:22.185 ], 00:16:22.185 "product_name": "Raid Volume", 00:16:22.185 "block_size": 512, 00:16:22.185 "num_blocks": 196608, 00:16:22.185 "uuid": "8c76ef45-1a99-4b4c-a67e-0aaba99bbd6d", 00:16:22.185 "assigned_rate_limits": { 00:16:22.185 "rw_ios_per_sec": 0, 00:16:22.185 "rw_mbytes_per_sec": 0, 00:16:22.185 "r_mbytes_per_sec": 0, 00:16:22.185 "w_mbytes_per_sec": 0 00:16:22.185 }, 00:16:22.185 "claimed": false, 00:16:22.185 "zoned": false, 00:16:22.185 "supported_io_types": { 00:16:22.185 "read": true, 00:16:22.185 "write": true, 00:16:22.185 "unmap": false, 00:16:22.185 "flush": false, 00:16:22.185 "reset": true, 00:16:22.185 "nvme_admin": false, 00:16:22.185 "nvme_io": false, 00:16:22.185 "nvme_io_md": false, 00:16:22.185 "write_zeroes": true, 00:16:22.185 "zcopy": false, 00:16:22.185 "get_zone_info": false, 00:16:22.185 "zone_management": false, 00:16:22.185 "zone_append": false, 00:16:22.185 "compare": false, 00:16:22.185 "compare_and_write": false, 00:16:22.185 "abort": false, 00:16:22.185 "seek_hole": false, 00:16:22.185 "seek_data": false, 00:16:22.185 "copy": false, 00:16:22.185 "nvme_iov_md": false 00:16:22.185 }, 00:16:22.185 "driver_specific": { 00:16:22.185 "raid": { 00:16:22.185 "uuid": "8c76ef45-1a99-4b4c-a67e-0aaba99bbd6d", 00:16:22.185 "strip_size_kb": 64, 
00:16:22.185 "state": "online", 00:16:22.185 "raid_level": "raid5f", 00:16:22.185 "superblock": false, 00:16:22.185 "num_base_bdevs": 4, 00:16:22.185 "num_base_bdevs_discovered": 4, 00:16:22.185 "num_base_bdevs_operational": 4, 00:16:22.185 "base_bdevs_list": [ 00:16:22.185 { 00:16:22.185 "name": "BaseBdev1", 00:16:22.185 "uuid": "0740d8a9-99ae-4a16-aa4b-da388c9aa96c", 00:16:22.185 "is_configured": true, 00:16:22.185 "data_offset": 0, 00:16:22.185 "data_size": 65536 00:16:22.185 }, 00:16:22.185 { 00:16:22.185 "name": "BaseBdev2", 00:16:22.185 "uuid": "51bc45ee-20e5-4e7c-8ab4-f6fb852578e7", 00:16:22.185 "is_configured": true, 00:16:22.185 "data_offset": 0, 00:16:22.185 "data_size": 65536 00:16:22.185 }, 00:16:22.185 { 00:16:22.185 "name": "BaseBdev3", 00:16:22.185 "uuid": "48e435b1-f22f-4160-ab81-ecde6f16941e", 00:16:22.185 "is_configured": true, 00:16:22.185 "data_offset": 0, 00:16:22.185 "data_size": 65536 00:16:22.185 }, 00:16:22.185 { 00:16:22.185 "name": "BaseBdev4", 00:16:22.185 "uuid": "9d03eab6-7fc3-462f-973d-a37da3575a0a", 00:16:22.185 "is_configured": true, 00:16:22.185 "data_offset": 0, 00:16:22.185 "data_size": 65536 00:16:22.185 } 00:16:22.185 ] 00:16:22.185 } 00:16:22.185 } 00:16:22.185 }' 00:16:22.185 16:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:22.185 16:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:22.185 BaseBdev2 00:16:22.185 BaseBdev3 00:16:22.185 BaseBdev4' 00:16:22.185 16:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:22.185 16:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:22.185 16:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:22.185 16:12:48 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:22.185 16:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:22.185 16:12:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.185 16:12:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.185 16:12:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.185 16:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:22.185 16:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:22.185 16:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:22.185 16:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:22.185 16:12:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.185 16:12:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.185 16:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:22.185 16:12:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.185 16:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:22.185 16:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:22.185 16:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:22.185 16:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:22.185 16:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:22.185 16:12:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.185 16:12:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.185 16:12:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.445 16:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:22.445 16:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:22.445 16:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:22.445 16:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:22.445 16:12:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.445 16:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:22.445 16:12:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.445 16:12:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.445 16:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:22.445 16:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:22.445 16:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:22.445 16:12:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.445 16:12:48 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:22.445 [2024-12-12 16:12:48.594801] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:22.445 16:12:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.445 16:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:22.445 16:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:16:22.445 16:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:22.445 16:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:16:22.445 16:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:22.445 16:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:16:22.445 16:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:22.445 16:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:22.445 16:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:22.445 16:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:22.445 16:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:22.445 16:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:22.445 16:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:22.445 16:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:22.445 16:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:22.445 16:12:48 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:22.445 16:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.445 16:12:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.446 16:12:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.446 16:12:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.446 16:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:22.446 "name": "Existed_Raid", 00:16:22.446 "uuid": "8c76ef45-1a99-4b4c-a67e-0aaba99bbd6d", 00:16:22.446 "strip_size_kb": 64, 00:16:22.446 "state": "online", 00:16:22.446 "raid_level": "raid5f", 00:16:22.446 "superblock": false, 00:16:22.446 "num_base_bdevs": 4, 00:16:22.446 "num_base_bdevs_discovered": 3, 00:16:22.446 "num_base_bdevs_operational": 3, 00:16:22.446 "base_bdevs_list": [ 00:16:22.446 { 00:16:22.446 "name": null, 00:16:22.446 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.446 "is_configured": false, 00:16:22.446 "data_offset": 0, 00:16:22.446 "data_size": 65536 00:16:22.446 }, 00:16:22.446 { 00:16:22.446 "name": "BaseBdev2", 00:16:22.446 "uuid": "51bc45ee-20e5-4e7c-8ab4-f6fb852578e7", 00:16:22.446 "is_configured": true, 00:16:22.446 "data_offset": 0, 00:16:22.446 "data_size": 65536 00:16:22.446 }, 00:16:22.446 { 00:16:22.446 "name": "BaseBdev3", 00:16:22.446 "uuid": "48e435b1-f22f-4160-ab81-ecde6f16941e", 00:16:22.446 "is_configured": true, 00:16:22.446 "data_offset": 0, 00:16:22.446 "data_size": 65536 00:16:22.446 }, 00:16:22.446 { 00:16:22.446 "name": "BaseBdev4", 00:16:22.446 "uuid": "9d03eab6-7fc3-462f-973d-a37da3575a0a", 00:16:22.446 "is_configured": true, 00:16:22.446 "data_offset": 0, 00:16:22.446 "data_size": 65536 00:16:22.446 } 00:16:22.446 ] 00:16:22.446 }' 00:16:22.446 
16:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:22.446 16:12:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.025 16:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:23.025 16:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:23.025 16:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.025 16:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:23.025 16:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.025 16:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.025 16:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.025 16:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:23.025 16:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:23.025 16:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:23.025 16:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.025 16:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.025 [2024-12-12 16:12:49.164153] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:23.025 [2024-12-12 16:12:49.164294] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:23.025 [2024-12-12 16:12:49.269288] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:23.025 16:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:16:23.025 16:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:23.025 16:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:23.025 16:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.025 16:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:23.025 16:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.025 16:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.025 16:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.025 16:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:23.025 16:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:23.025 16:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:16:23.025 16:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.025 16:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.025 [2024-12-12 16:12:49.329197] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:23.311 16:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.311 16:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:23.311 16:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:23.311 16:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.311 16:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # 
jq -r '.[0]["name"]' 00:16:23.311 16:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.311 16:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.311 16:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.311 16:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:23.311 16:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:23.311 16:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:16:23.311 16:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.311 16:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.311 [2024-12-12 16:12:49.494716] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:16:23.311 [2024-12-12 16:12:49.494876] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:23.311 16:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.311 16:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:23.311 16:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:23.311 16:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.311 16:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.311 16:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.311 16:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:23.311 16:12:49 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.311 16:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:23.311 16:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:23.311 16:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:16:23.311 16:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:23.311 16:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:23.312 16:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:23.312 16:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.312 16:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.576 BaseBdev2 00:16:23.576 16:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.576 16:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:16:23.576 16:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:23.577 16:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:23.577 16:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:23.577 16:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:23.577 16:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:23.577 16:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:23.577 16:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:23.577 16:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.577 16:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.577 16:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:23.577 16:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.577 16:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.577 [ 00:16:23.577 { 00:16:23.577 "name": "BaseBdev2", 00:16:23.577 "aliases": [ 00:16:23.577 "657bae11-2076-4432-b5d0-a66eaef9c67f" 00:16:23.577 ], 00:16:23.577 "product_name": "Malloc disk", 00:16:23.577 "block_size": 512, 00:16:23.577 "num_blocks": 65536, 00:16:23.577 "uuid": "657bae11-2076-4432-b5d0-a66eaef9c67f", 00:16:23.577 "assigned_rate_limits": { 00:16:23.577 "rw_ios_per_sec": 0, 00:16:23.577 "rw_mbytes_per_sec": 0, 00:16:23.577 "r_mbytes_per_sec": 0, 00:16:23.577 "w_mbytes_per_sec": 0 00:16:23.577 }, 00:16:23.577 "claimed": false, 00:16:23.577 "zoned": false, 00:16:23.577 "supported_io_types": { 00:16:23.577 "read": true, 00:16:23.577 "write": true, 00:16:23.577 "unmap": true, 00:16:23.577 "flush": true, 00:16:23.577 "reset": true, 00:16:23.577 "nvme_admin": false, 00:16:23.577 "nvme_io": false, 00:16:23.577 "nvme_io_md": false, 00:16:23.577 "write_zeroes": true, 00:16:23.577 "zcopy": true, 00:16:23.577 "get_zone_info": false, 00:16:23.577 "zone_management": false, 00:16:23.577 "zone_append": false, 00:16:23.577 "compare": false, 00:16:23.577 "compare_and_write": false, 00:16:23.577 "abort": true, 00:16:23.577 "seek_hole": false, 00:16:23.577 "seek_data": false, 00:16:23.577 "copy": true, 00:16:23.577 "nvme_iov_md": false 00:16:23.577 }, 00:16:23.577 "memory_domains": [ 00:16:23.577 { 00:16:23.577 "dma_device_id": "system", 00:16:23.577 "dma_device_type": 1 00:16:23.577 }, 
00:16:23.577 { 00:16:23.577 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:23.577 "dma_device_type": 2 00:16:23.577 } 00:16:23.577 ], 00:16:23.577 "driver_specific": {} 00:16:23.577 } 00:16:23.577 ] 00:16:23.577 16:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.577 16:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:23.577 16:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:23.577 16:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:23.577 16:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:23.577 16:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.577 16:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.577 BaseBdev3 00:16:23.577 16:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.577 16:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:16:23.577 16:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:23.577 16:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:23.577 16:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:23.577 16:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:23.577 16:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:23.577 16:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:23.577 16:12:49 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.577 16:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.577 16:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.577 16:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:23.577 16:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.577 16:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.577 [ 00:16:23.577 { 00:16:23.577 "name": "BaseBdev3", 00:16:23.577 "aliases": [ 00:16:23.577 "7e50f878-1bbc-4275-876d-8b703bf721dd" 00:16:23.577 ], 00:16:23.577 "product_name": "Malloc disk", 00:16:23.577 "block_size": 512, 00:16:23.577 "num_blocks": 65536, 00:16:23.577 "uuid": "7e50f878-1bbc-4275-876d-8b703bf721dd", 00:16:23.577 "assigned_rate_limits": { 00:16:23.577 "rw_ios_per_sec": 0, 00:16:23.577 "rw_mbytes_per_sec": 0, 00:16:23.577 "r_mbytes_per_sec": 0, 00:16:23.577 "w_mbytes_per_sec": 0 00:16:23.577 }, 00:16:23.577 "claimed": false, 00:16:23.577 "zoned": false, 00:16:23.577 "supported_io_types": { 00:16:23.577 "read": true, 00:16:23.577 "write": true, 00:16:23.577 "unmap": true, 00:16:23.577 "flush": true, 00:16:23.577 "reset": true, 00:16:23.577 "nvme_admin": false, 00:16:23.577 "nvme_io": false, 00:16:23.577 "nvme_io_md": false, 00:16:23.577 "write_zeroes": true, 00:16:23.577 "zcopy": true, 00:16:23.577 "get_zone_info": false, 00:16:23.577 "zone_management": false, 00:16:23.577 "zone_append": false, 00:16:23.577 "compare": false, 00:16:23.577 "compare_and_write": false, 00:16:23.577 "abort": true, 00:16:23.577 "seek_hole": false, 00:16:23.577 "seek_data": false, 00:16:23.577 "copy": true, 00:16:23.577 "nvme_iov_md": false 00:16:23.577 }, 00:16:23.577 "memory_domains": [ 00:16:23.577 { 00:16:23.577 "dma_device_id": "system", 00:16:23.577 
"dma_device_type": 1 00:16:23.577 }, 00:16:23.577 { 00:16:23.577 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:23.577 "dma_device_type": 2 00:16:23.577 } 00:16:23.577 ], 00:16:23.577 "driver_specific": {} 00:16:23.577 } 00:16:23.577 ] 00:16:23.577 16:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.577 16:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:23.577 16:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:23.577 16:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:23.577 16:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:23.577 16:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.577 16:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.577 BaseBdev4 00:16:23.577 16:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.577 16:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:16:23.577 16:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:16:23.577 16:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:23.577 16:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:23.577 16:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:23.577 16:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:23.577 16:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:23.577 16:12:49 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.577 16:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.577 16:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.577 16:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:23.577 16:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.577 16:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.577 [ 00:16:23.577 { 00:16:23.577 "name": "BaseBdev4", 00:16:23.577 "aliases": [ 00:16:23.577 "bbfc8f7e-998f-41bb-b4a3-955899690a4e" 00:16:23.577 ], 00:16:23.577 "product_name": "Malloc disk", 00:16:23.577 "block_size": 512, 00:16:23.577 "num_blocks": 65536, 00:16:23.577 "uuid": "bbfc8f7e-998f-41bb-b4a3-955899690a4e", 00:16:23.577 "assigned_rate_limits": { 00:16:23.577 "rw_ios_per_sec": 0, 00:16:23.577 "rw_mbytes_per_sec": 0, 00:16:23.577 "r_mbytes_per_sec": 0, 00:16:23.577 "w_mbytes_per_sec": 0 00:16:23.577 }, 00:16:23.577 "claimed": false, 00:16:23.577 "zoned": false, 00:16:23.577 "supported_io_types": { 00:16:23.577 "read": true, 00:16:23.577 "write": true, 00:16:23.577 "unmap": true, 00:16:23.577 "flush": true, 00:16:23.577 "reset": true, 00:16:23.577 "nvme_admin": false, 00:16:23.577 "nvme_io": false, 00:16:23.577 "nvme_io_md": false, 00:16:23.577 "write_zeroes": true, 00:16:23.577 "zcopy": true, 00:16:23.577 "get_zone_info": false, 00:16:23.577 "zone_management": false, 00:16:23.577 "zone_append": false, 00:16:23.577 "compare": false, 00:16:23.577 "compare_and_write": false, 00:16:23.577 "abort": true, 00:16:23.577 "seek_hole": false, 00:16:23.577 "seek_data": false, 00:16:23.577 "copy": true, 00:16:23.577 "nvme_iov_md": false 00:16:23.577 }, 00:16:23.577 "memory_domains": [ 00:16:23.577 { 00:16:23.577 
"dma_device_id": "system", 00:16:23.577 "dma_device_type": 1 00:16:23.577 }, 00:16:23.577 { 00:16:23.577 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:23.577 "dma_device_type": 2 00:16:23.577 } 00:16:23.577 ], 00:16:23.577 "driver_specific": {} 00:16:23.577 } 00:16:23.577 ] 00:16:23.577 16:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.578 16:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:23.578 16:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:23.578 16:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:23.578 16:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:23.578 16:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.578 16:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.578 [2024-12-12 16:12:49.924943] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:23.578 [2024-12-12 16:12:49.925090] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:23.578 [2024-12-12 16:12:49.925123] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:23.837 [2024-12-12 16:12:49.927304] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:23.837 [2024-12-12 16:12:49.927367] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:23.837 16:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.837 16:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:16:23.837 16:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:23.837 16:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:23.837 16:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:23.837 16:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:23.837 16:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:23.837 16:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:23.837 16:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:23.837 16:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:23.837 16:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:23.837 16:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.837 16:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:23.837 16:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.837 16:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.837 16:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.837 16:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:23.837 "name": "Existed_Raid", 00:16:23.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.837 "strip_size_kb": 64, 00:16:23.837 "state": "configuring", 00:16:23.837 "raid_level": "raid5f", 00:16:23.837 "superblock": false, 00:16:23.837 
"num_base_bdevs": 4, 00:16:23.837 "num_base_bdevs_discovered": 3, 00:16:23.837 "num_base_bdevs_operational": 4, 00:16:23.837 "base_bdevs_list": [ 00:16:23.837 { 00:16:23.837 "name": "BaseBdev1", 00:16:23.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.837 "is_configured": false, 00:16:23.837 "data_offset": 0, 00:16:23.837 "data_size": 0 00:16:23.837 }, 00:16:23.837 { 00:16:23.837 "name": "BaseBdev2", 00:16:23.837 "uuid": "657bae11-2076-4432-b5d0-a66eaef9c67f", 00:16:23.837 "is_configured": true, 00:16:23.837 "data_offset": 0, 00:16:23.837 "data_size": 65536 00:16:23.837 }, 00:16:23.837 { 00:16:23.837 "name": "BaseBdev3", 00:16:23.837 "uuid": "7e50f878-1bbc-4275-876d-8b703bf721dd", 00:16:23.837 "is_configured": true, 00:16:23.837 "data_offset": 0, 00:16:23.837 "data_size": 65536 00:16:23.837 }, 00:16:23.837 { 00:16:23.837 "name": "BaseBdev4", 00:16:23.837 "uuid": "bbfc8f7e-998f-41bb-b4a3-955899690a4e", 00:16:23.837 "is_configured": true, 00:16:23.837 "data_offset": 0, 00:16:23.837 "data_size": 65536 00:16:23.837 } 00:16:23.837 ] 00:16:23.837 }' 00:16:23.837 16:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:23.837 16:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.095 16:12:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:24.095 16:12:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.095 16:12:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.095 [2024-12-12 16:12:50.360173] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:24.095 16:12:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.095 16:12:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 
00:16:24.095 16:12:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:24.095 16:12:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:24.095 16:12:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:24.095 16:12:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:24.095 16:12:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:24.095 16:12:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:24.096 16:12:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:24.096 16:12:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:24.096 16:12:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:24.096 16:12:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:24.096 16:12:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.096 16:12:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.096 16:12:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.096 16:12:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.096 16:12:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:24.096 "name": "Existed_Raid", 00:16:24.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.096 "strip_size_kb": 64, 00:16:24.096 "state": "configuring", 00:16:24.096 "raid_level": "raid5f", 00:16:24.096 "superblock": false, 00:16:24.096 "num_base_bdevs": 4, 
00:16:24.096 "num_base_bdevs_discovered": 2, 00:16:24.096 "num_base_bdevs_operational": 4, 00:16:24.096 "base_bdevs_list": [ 00:16:24.096 { 00:16:24.096 "name": "BaseBdev1", 00:16:24.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.096 "is_configured": false, 00:16:24.096 "data_offset": 0, 00:16:24.096 "data_size": 0 00:16:24.096 }, 00:16:24.096 { 00:16:24.096 "name": null, 00:16:24.096 "uuid": "657bae11-2076-4432-b5d0-a66eaef9c67f", 00:16:24.096 "is_configured": false, 00:16:24.096 "data_offset": 0, 00:16:24.096 "data_size": 65536 00:16:24.096 }, 00:16:24.096 { 00:16:24.096 "name": "BaseBdev3", 00:16:24.096 "uuid": "7e50f878-1bbc-4275-876d-8b703bf721dd", 00:16:24.096 "is_configured": true, 00:16:24.096 "data_offset": 0, 00:16:24.096 "data_size": 65536 00:16:24.096 }, 00:16:24.096 { 00:16:24.096 "name": "BaseBdev4", 00:16:24.096 "uuid": "bbfc8f7e-998f-41bb-b4a3-955899690a4e", 00:16:24.096 "is_configured": true, 00:16:24.096 "data_offset": 0, 00:16:24.096 "data_size": 65536 00:16:24.096 } 00:16:24.096 ] 00:16:24.096 }' 00:16:24.096 16:12:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:24.096 16:12:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.663 16:12:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.663 16:12:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:24.663 16:12:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.664 16:12:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.664 16:12:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.664 16:12:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:24.664 16:12:50 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:24.664 16:12:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.664 16:12:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.664 [2024-12-12 16:12:50.851169] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:24.664 BaseBdev1 00:16:24.664 16:12:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.664 16:12:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:24.664 16:12:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:24.664 16:12:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:24.664 16:12:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:24.664 16:12:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:24.664 16:12:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:24.664 16:12:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:24.664 16:12:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.664 16:12:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.664 16:12:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.664 16:12:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:24.664 16:12:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.664 16:12:50 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.664 [ 00:16:24.664 { 00:16:24.664 "name": "BaseBdev1", 00:16:24.664 "aliases": [ 00:16:24.664 "0d83a45e-4286-45f0-9b4f-988b1fce2c9a" 00:16:24.664 ], 00:16:24.664 "product_name": "Malloc disk", 00:16:24.664 "block_size": 512, 00:16:24.664 "num_blocks": 65536, 00:16:24.664 "uuid": "0d83a45e-4286-45f0-9b4f-988b1fce2c9a", 00:16:24.664 "assigned_rate_limits": { 00:16:24.664 "rw_ios_per_sec": 0, 00:16:24.664 "rw_mbytes_per_sec": 0, 00:16:24.664 "r_mbytes_per_sec": 0, 00:16:24.664 "w_mbytes_per_sec": 0 00:16:24.664 }, 00:16:24.664 "claimed": true, 00:16:24.664 "claim_type": "exclusive_write", 00:16:24.664 "zoned": false, 00:16:24.664 "supported_io_types": { 00:16:24.664 "read": true, 00:16:24.664 "write": true, 00:16:24.664 "unmap": true, 00:16:24.664 "flush": true, 00:16:24.664 "reset": true, 00:16:24.664 "nvme_admin": false, 00:16:24.664 "nvme_io": false, 00:16:24.664 "nvme_io_md": false, 00:16:24.664 "write_zeroes": true, 00:16:24.664 "zcopy": true, 00:16:24.664 "get_zone_info": false, 00:16:24.664 "zone_management": false, 00:16:24.664 "zone_append": false, 00:16:24.664 "compare": false, 00:16:24.664 "compare_and_write": false, 00:16:24.664 "abort": true, 00:16:24.664 "seek_hole": false, 00:16:24.664 "seek_data": false, 00:16:24.664 "copy": true, 00:16:24.664 "nvme_iov_md": false 00:16:24.664 }, 00:16:24.664 "memory_domains": [ 00:16:24.664 { 00:16:24.664 "dma_device_id": "system", 00:16:24.664 "dma_device_type": 1 00:16:24.664 }, 00:16:24.664 { 00:16:24.664 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:24.664 "dma_device_type": 2 00:16:24.664 } 00:16:24.664 ], 00:16:24.664 "driver_specific": {} 00:16:24.664 } 00:16:24.664 ] 00:16:24.664 16:12:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.664 16:12:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:24.664 16:12:50 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:24.664 16:12:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:24.664 16:12:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:24.664 16:12:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:24.664 16:12:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:24.664 16:12:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:24.664 16:12:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:24.664 16:12:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:24.664 16:12:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:24.664 16:12:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:24.664 16:12:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.664 16:12:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:24.664 16:12:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.664 16:12:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.664 16:12:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.664 16:12:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:24.664 "name": "Existed_Raid", 00:16:24.664 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.664 "strip_size_kb": 64, 00:16:24.664 "state": 
"configuring", 00:16:24.664 "raid_level": "raid5f", 00:16:24.664 "superblock": false, 00:16:24.664 "num_base_bdevs": 4, 00:16:24.664 "num_base_bdevs_discovered": 3, 00:16:24.664 "num_base_bdevs_operational": 4, 00:16:24.664 "base_bdevs_list": [ 00:16:24.664 { 00:16:24.664 "name": "BaseBdev1", 00:16:24.664 "uuid": "0d83a45e-4286-45f0-9b4f-988b1fce2c9a", 00:16:24.664 "is_configured": true, 00:16:24.664 "data_offset": 0, 00:16:24.664 "data_size": 65536 00:16:24.664 }, 00:16:24.664 { 00:16:24.664 "name": null, 00:16:24.664 "uuid": "657bae11-2076-4432-b5d0-a66eaef9c67f", 00:16:24.664 "is_configured": false, 00:16:24.664 "data_offset": 0, 00:16:24.664 "data_size": 65536 00:16:24.664 }, 00:16:24.664 { 00:16:24.664 "name": "BaseBdev3", 00:16:24.664 "uuid": "7e50f878-1bbc-4275-876d-8b703bf721dd", 00:16:24.664 "is_configured": true, 00:16:24.664 "data_offset": 0, 00:16:24.664 "data_size": 65536 00:16:24.664 }, 00:16:24.664 { 00:16:24.664 "name": "BaseBdev4", 00:16:24.664 "uuid": "bbfc8f7e-998f-41bb-b4a3-955899690a4e", 00:16:24.664 "is_configured": true, 00:16:24.664 "data_offset": 0, 00:16:24.664 "data_size": 65536 00:16:24.664 } 00:16:24.664 ] 00:16:24.664 }' 00:16:24.664 16:12:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:24.664 16:12:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.233 16:12:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.233 16:12:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:25.233 16:12:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.233 16:12:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.233 16:12:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.233 16:12:51 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:25.233 16:12:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:25.233 16:12:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.233 16:12:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.233 [2024-12-12 16:12:51.378362] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:25.233 16:12:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.233 16:12:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:25.233 16:12:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:25.234 16:12:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:25.234 16:12:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:25.234 16:12:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:25.234 16:12:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:25.234 16:12:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:25.234 16:12:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:25.234 16:12:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:25.234 16:12:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:25.234 16:12:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.234 16:12:51 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.234 16:12:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.234 16:12:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:25.234 16:12:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.234 16:12:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:25.234 "name": "Existed_Raid", 00:16:25.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.234 "strip_size_kb": 64, 00:16:25.234 "state": "configuring", 00:16:25.234 "raid_level": "raid5f", 00:16:25.234 "superblock": false, 00:16:25.234 "num_base_bdevs": 4, 00:16:25.234 "num_base_bdevs_discovered": 2, 00:16:25.234 "num_base_bdevs_operational": 4, 00:16:25.234 "base_bdevs_list": [ 00:16:25.234 { 00:16:25.234 "name": "BaseBdev1", 00:16:25.234 "uuid": "0d83a45e-4286-45f0-9b4f-988b1fce2c9a", 00:16:25.234 "is_configured": true, 00:16:25.234 "data_offset": 0, 00:16:25.234 "data_size": 65536 00:16:25.234 }, 00:16:25.234 { 00:16:25.234 "name": null, 00:16:25.234 "uuid": "657bae11-2076-4432-b5d0-a66eaef9c67f", 00:16:25.234 "is_configured": false, 00:16:25.234 "data_offset": 0, 00:16:25.234 "data_size": 65536 00:16:25.234 }, 00:16:25.234 { 00:16:25.234 "name": null, 00:16:25.234 "uuid": "7e50f878-1bbc-4275-876d-8b703bf721dd", 00:16:25.234 "is_configured": false, 00:16:25.234 "data_offset": 0, 00:16:25.234 "data_size": 65536 00:16:25.234 }, 00:16:25.234 { 00:16:25.234 "name": "BaseBdev4", 00:16:25.234 "uuid": "bbfc8f7e-998f-41bb-b4a3-955899690a4e", 00:16:25.234 "is_configured": true, 00:16:25.234 "data_offset": 0, 00:16:25.234 "data_size": 65536 00:16:25.234 } 00:16:25.234 ] 00:16:25.234 }' 00:16:25.234 16:12:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:25.234 16:12:51 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.492 16:12:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.492 16:12:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.492 16:12:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.492 16:12:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:25.492 16:12:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.752 16:12:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:25.752 16:12:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:25.752 16:12:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.752 16:12:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.752 [2024-12-12 16:12:51.873497] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:25.752 16:12:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.752 16:12:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:25.752 16:12:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:25.752 16:12:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:25.752 16:12:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:25.752 16:12:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:25.752 
16:12:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:25.752 16:12:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:25.752 16:12:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:25.752 16:12:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:25.752 16:12:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:25.752 16:12:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.752 16:12:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.752 16:12:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:25.752 16:12:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.752 16:12:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.752 16:12:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:25.752 "name": "Existed_Raid", 00:16:25.752 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.752 "strip_size_kb": 64, 00:16:25.752 "state": "configuring", 00:16:25.752 "raid_level": "raid5f", 00:16:25.752 "superblock": false, 00:16:25.752 "num_base_bdevs": 4, 00:16:25.752 "num_base_bdevs_discovered": 3, 00:16:25.752 "num_base_bdevs_operational": 4, 00:16:25.752 "base_bdevs_list": [ 00:16:25.752 { 00:16:25.752 "name": "BaseBdev1", 00:16:25.752 "uuid": "0d83a45e-4286-45f0-9b4f-988b1fce2c9a", 00:16:25.752 "is_configured": true, 00:16:25.752 "data_offset": 0, 00:16:25.752 "data_size": 65536 00:16:25.752 }, 00:16:25.752 { 00:16:25.752 "name": null, 00:16:25.752 "uuid": "657bae11-2076-4432-b5d0-a66eaef9c67f", 00:16:25.752 "is_configured": 
false, 00:16:25.752 "data_offset": 0, 00:16:25.752 "data_size": 65536 00:16:25.752 }, 00:16:25.752 { 00:16:25.752 "name": "BaseBdev3", 00:16:25.752 "uuid": "7e50f878-1bbc-4275-876d-8b703bf721dd", 00:16:25.752 "is_configured": true, 00:16:25.752 "data_offset": 0, 00:16:25.752 "data_size": 65536 00:16:25.752 }, 00:16:25.752 { 00:16:25.752 "name": "BaseBdev4", 00:16:25.752 "uuid": "bbfc8f7e-998f-41bb-b4a3-955899690a4e", 00:16:25.752 "is_configured": true, 00:16:25.752 "data_offset": 0, 00:16:25.752 "data_size": 65536 00:16:25.752 } 00:16:25.752 ] 00:16:25.752 }' 00:16:25.752 16:12:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:25.752 16:12:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.321 16:12:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.321 16:12:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:26.321 16:12:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.321 16:12:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.321 16:12:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.321 16:12:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:26.321 16:12:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:26.321 16:12:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.321 16:12:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.321 [2024-12-12 16:12:52.420669] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:26.321 16:12:52 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.321 16:12:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:26.321 16:12:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:26.321 16:12:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:26.321 16:12:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:26.321 16:12:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:26.321 16:12:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:26.321 16:12:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:26.321 16:12:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:26.321 16:12:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:26.321 16:12:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:26.321 16:12:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.321 16:12:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:26.321 16:12:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.321 16:12:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.321 16:12:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.321 16:12:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:26.321 "name": "Existed_Raid", 00:16:26.321 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:26.321 "strip_size_kb": 64, 00:16:26.321 "state": "configuring", 00:16:26.321 "raid_level": "raid5f", 00:16:26.321 "superblock": false, 00:16:26.321 "num_base_bdevs": 4, 00:16:26.321 "num_base_bdevs_discovered": 2, 00:16:26.321 "num_base_bdevs_operational": 4, 00:16:26.321 "base_bdevs_list": [ 00:16:26.321 { 00:16:26.321 "name": null, 00:16:26.321 "uuid": "0d83a45e-4286-45f0-9b4f-988b1fce2c9a", 00:16:26.321 "is_configured": false, 00:16:26.321 "data_offset": 0, 00:16:26.321 "data_size": 65536 00:16:26.322 }, 00:16:26.322 { 00:16:26.322 "name": null, 00:16:26.322 "uuid": "657bae11-2076-4432-b5d0-a66eaef9c67f", 00:16:26.322 "is_configured": false, 00:16:26.322 "data_offset": 0, 00:16:26.322 "data_size": 65536 00:16:26.322 }, 00:16:26.322 { 00:16:26.322 "name": "BaseBdev3", 00:16:26.322 "uuid": "7e50f878-1bbc-4275-876d-8b703bf721dd", 00:16:26.322 "is_configured": true, 00:16:26.322 "data_offset": 0, 00:16:26.322 "data_size": 65536 00:16:26.322 }, 00:16:26.322 { 00:16:26.322 "name": "BaseBdev4", 00:16:26.322 "uuid": "bbfc8f7e-998f-41bb-b4a3-955899690a4e", 00:16:26.322 "is_configured": true, 00:16:26.322 "data_offset": 0, 00:16:26.322 "data_size": 65536 00:16:26.322 } 00:16:26.322 ] 00:16:26.322 }' 00:16:26.322 16:12:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:26.322 16:12:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.889 16:12:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.889 16:12:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.889 16:12:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.889 16:12:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:26.889 16:12:52 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.889 16:12:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:26.889 16:12:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:26.889 16:12:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.889 16:12:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.889 [2024-12-12 16:12:52.986693] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:26.889 16:12:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.889 16:12:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:26.889 16:12:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:26.889 16:12:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:26.889 16:12:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:26.889 16:12:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:26.889 16:12:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:26.889 16:12:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:26.889 16:12:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:26.889 16:12:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:26.889 16:12:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:26.889 16:12:52 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.889 16:12:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.889 16:12:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.889 16:12:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:26.889 16:12:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.889 16:12:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:26.889 "name": "Existed_Raid", 00:16:26.889 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.889 "strip_size_kb": 64, 00:16:26.889 "state": "configuring", 00:16:26.889 "raid_level": "raid5f", 00:16:26.889 "superblock": false, 00:16:26.889 "num_base_bdevs": 4, 00:16:26.889 "num_base_bdevs_discovered": 3, 00:16:26.889 "num_base_bdevs_operational": 4, 00:16:26.889 "base_bdevs_list": [ 00:16:26.889 { 00:16:26.889 "name": null, 00:16:26.889 "uuid": "0d83a45e-4286-45f0-9b4f-988b1fce2c9a", 00:16:26.889 "is_configured": false, 00:16:26.889 "data_offset": 0, 00:16:26.889 "data_size": 65536 00:16:26.889 }, 00:16:26.889 { 00:16:26.889 "name": "BaseBdev2", 00:16:26.889 "uuid": "657bae11-2076-4432-b5d0-a66eaef9c67f", 00:16:26.889 "is_configured": true, 00:16:26.889 "data_offset": 0, 00:16:26.889 "data_size": 65536 00:16:26.889 }, 00:16:26.889 { 00:16:26.889 "name": "BaseBdev3", 00:16:26.889 "uuid": "7e50f878-1bbc-4275-876d-8b703bf721dd", 00:16:26.889 "is_configured": true, 00:16:26.889 "data_offset": 0, 00:16:26.889 "data_size": 65536 00:16:26.889 }, 00:16:26.889 { 00:16:26.889 "name": "BaseBdev4", 00:16:26.889 "uuid": "bbfc8f7e-998f-41bb-b4a3-955899690a4e", 00:16:26.889 "is_configured": true, 00:16:26.889 "data_offset": 0, 00:16:26.889 "data_size": 65536 00:16:26.889 } 00:16:26.889 ] 00:16:26.889 }' 00:16:26.889 16:12:53 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:26.889 16:12:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.149 16:12:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.149 16:12:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:27.149 16:12:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.149 16:12:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.149 16:12:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.149 16:12:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:27.149 16:12:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.149 16:12:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:27.149 16:12:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.149 16:12:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.149 16:12:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.149 16:12:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 0d83a45e-4286-45f0-9b4f-988b1fce2c9a 00:16:27.149 16:12:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.149 16:12:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.408 [2024-12-12 16:12:53.533077] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:27.408 [2024-12-12 
16:12:53.533148] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:27.408 [2024-12-12 16:12:53.533157] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:27.408 [2024-12-12 16:12:53.533457] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:16:27.408 [2024-12-12 16:12:53.541103] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:27.408 [2024-12-12 16:12:53.541134] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:16:27.408 [2024-12-12 16:12:53.541453] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:27.408 NewBaseBdev 00:16:27.408 16:12:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.408 16:12:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:27.408 16:12:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:16:27.408 16:12:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:27.408 16:12:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:27.408 16:12:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:27.408 16:12:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:27.408 16:12:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:27.408 16:12:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.408 16:12:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.408 16:12:53 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.408 16:12:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:27.408 16:12:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.408 16:12:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.408 [ 00:16:27.408 { 00:16:27.408 "name": "NewBaseBdev", 00:16:27.408 "aliases": [ 00:16:27.408 "0d83a45e-4286-45f0-9b4f-988b1fce2c9a" 00:16:27.408 ], 00:16:27.408 "product_name": "Malloc disk", 00:16:27.408 "block_size": 512, 00:16:27.408 "num_blocks": 65536, 00:16:27.408 "uuid": "0d83a45e-4286-45f0-9b4f-988b1fce2c9a", 00:16:27.408 "assigned_rate_limits": { 00:16:27.408 "rw_ios_per_sec": 0, 00:16:27.408 "rw_mbytes_per_sec": 0, 00:16:27.408 "r_mbytes_per_sec": 0, 00:16:27.408 "w_mbytes_per_sec": 0 00:16:27.408 }, 00:16:27.408 "claimed": true, 00:16:27.408 "claim_type": "exclusive_write", 00:16:27.408 "zoned": false, 00:16:27.408 "supported_io_types": { 00:16:27.408 "read": true, 00:16:27.408 "write": true, 00:16:27.408 "unmap": true, 00:16:27.408 "flush": true, 00:16:27.408 "reset": true, 00:16:27.408 "nvme_admin": false, 00:16:27.408 "nvme_io": false, 00:16:27.409 "nvme_io_md": false, 00:16:27.409 "write_zeroes": true, 00:16:27.409 "zcopy": true, 00:16:27.409 "get_zone_info": false, 00:16:27.409 "zone_management": false, 00:16:27.409 "zone_append": false, 00:16:27.409 "compare": false, 00:16:27.409 "compare_and_write": false, 00:16:27.409 "abort": true, 00:16:27.409 "seek_hole": false, 00:16:27.409 "seek_data": false, 00:16:27.409 "copy": true, 00:16:27.409 "nvme_iov_md": false 00:16:27.409 }, 00:16:27.409 "memory_domains": [ 00:16:27.409 { 00:16:27.409 "dma_device_id": "system", 00:16:27.409 "dma_device_type": 1 00:16:27.409 }, 00:16:27.409 { 00:16:27.409 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:27.409 "dma_device_type": 2 00:16:27.409 } 
00:16:27.409 ], 00:16:27.409 "driver_specific": {} 00:16:27.409 } 00:16:27.409 ] 00:16:27.409 16:12:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.409 16:12:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:27.409 16:12:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:27.409 16:12:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:27.409 16:12:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:27.409 16:12:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:27.409 16:12:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:27.409 16:12:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:27.409 16:12:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:27.409 16:12:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:27.409 16:12:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:27.409 16:12:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:27.409 16:12:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.409 16:12:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:27.409 16:12:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.409 16:12:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.409 16:12:53 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.409 16:12:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:27.409 "name": "Existed_Raid", 00:16:27.409 "uuid": "b0d76d18-5dca-42bb-aaba-5ec78f2a6270", 00:16:27.409 "strip_size_kb": 64, 00:16:27.409 "state": "online", 00:16:27.409 "raid_level": "raid5f", 00:16:27.409 "superblock": false, 00:16:27.409 "num_base_bdevs": 4, 00:16:27.409 "num_base_bdevs_discovered": 4, 00:16:27.409 "num_base_bdevs_operational": 4, 00:16:27.409 "base_bdevs_list": [ 00:16:27.409 { 00:16:27.409 "name": "NewBaseBdev", 00:16:27.409 "uuid": "0d83a45e-4286-45f0-9b4f-988b1fce2c9a", 00:16:27.409 "is_configured": true, 00:16:27.409 "data_offset": 0, 00:16:27.409 "data_size": 65536 00:16:27.409 }, 00:16:27.409 { 00:16:27.409 "name": "BaseBdev2", 00:16:27.409 "uuid": "657bae11-2076-4432-b5d0-a66eaef9c67f", 00:16:27.409 "is_configured": true, 00:16:27.409 "data_offset": 0, 00:16:27.409 "data_size": 65536 00:16:27.409 }, 00:16:27.409 { 00:16:27.409 "name": "BaseBdev3", 00:16:27.409 "uuid": "7e50f878-1bbc-4275-876d-8b703bf721dd", 00:16:27.409 "is_configured": true, 00:16:27.409 "data_offset": 0, 00:16:27.409 "data_size": 65536 00:16:27.409 }, 00:16:27.409 { 00:16:27.409 "name": "BaseBdev4", 00:16:27.409 "uuid": "bbfc8f7e-998f-41bb-b4a3-955899690a4e", 00:16:27.409 "is_configured": true, 00:16:27.409 "data_offset": 0, 00:16:27.409 "data_size": 65536 00:16:27.409 } 00:16:27.409 ] 00:16:27.409 }' 00:16:27.409 16:12:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:27.409 16:12:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.977 16:12:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:27.977 16:12:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:27.977 16:12:54 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:27.977 16:12:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:27.977 16:12:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:27.977 16:12:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:27.977 16:12:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:27.977 16:12:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:27.977 16:12:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.977 16:12:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.977 [2024-12-12 16:12:54.042291] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:27.977 16:12:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.977 16:12:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:27.977 "name": "Existed_Raid", 00:16:27.977 "aliases": [ 00:16:27.977 "b0d76d18-5dca-42bb-aaba-5ec78f2a6270" 00:16:27.977 ], 00:16:27.977 "product_name": "Raid Volume", 00:16:27.977 "block_size": 512, 00:16:27.977 "num_blocks": 196608, 00:16:27.977 "uuid": "b0d76d18-5dca-42bb-aaba-5ec78f2a6270", 00:16:27.977 "assigned_rate_limits": { 00:16:27.977 "rw_ios_per_sec": 0, 00:16:27.977 "rw_mbytes_per_sec": 0, 00:16:27.977 "r_mbytes_per_sec": 0, 00:16:27.977 "w_mbytes_per_sec": 0 00:16:27.977 }, 00:16:27.977 "claimed": false, 00:16:27.977 "zoned": false, 00:16:27.977 "supported_io_types": { 00:16:27.977 "read": true, 00:16:27.977 "write": true, 00:16:27.977 "unmap": false, 00:16:27.977 "flush": false, 00:16:27.977 "reset": true, 00:16:27.977 "nvme_admin": false, 00:16:27.977 "nvme_io": false, 00:16:27.977 "nvme_io_md": 
false, 00:16:27.977 "write_zeroes": true, 00:16:27.977 "zcopy": false, 00:16:27.977 "get_zone_info": false, 00:16:27.977 "zone_management": false, 00:16:27.977 "zone_append": false, 00:16:27.977 "compare": false, 00:16:27.977 "compare_and_write": false, 00:16:27.977 "abort": false, 00:16:27.977 "seek_hole": false, 00:16:27.977 "seek_data": false, 00:16:27.977 "copy": false, 00:16:27.977 "nvme_iov_md": false 00:16:27.977 }, 00:16:27.977 "driver_specific": { 00:16:27.977 "raid": { 00:16:27.977 "uuid": "b0d76d18-5dca-42bb-aaba-5ec78f2a6270", 00:16:27.977 "strip_size_kb": 64, 00:16:27.977 "state": "online", 00:16:27.977 "raid_level": "raid5f", 00:16:27.977 "superblock": false, 00:16:27.977 "num_base_bdevs": 4, 00:16:27.978 "num_base_bdevs_discovered": 4, 00:16:27.978 "num_base_bdevs_operational": 4, 00:16:27.978 "base_bdevs_list": [ 00:16:27.978 { 00:16:27.978 "name": "NewBaseBdev", 00:16:27.978 "uuid": "0d83a45e-4286-45f0-9b4f-988b1fce2c9a", 00:16:27.978 "is_configured": true, 00:16:27.978 "data_offset": 0, 00:16:27.978 "data_size": 65536 00:16:27.978 }, 00:16:27.978 { 00:16:27.978 "name": "BaseBdev2", 00:16:27.978 "uuid": "657bae11-2076-4432-b5d0-a66eaef9c67f", 00:16:27.978 "is_configured": true, 00:16:27.978 "data_offset": 0, 00:16:27.978 "data_size": 65536 00:16:27.978 }, 00:16:27.978 { 00:16:27.978 "name": "BaseBdev3", 00:16:27.978 "uuid": "7e50f878-1bbc-4275-876d-8b703bf721dd", 00:16:27.978 "is_configured": true, 00:16:27.978 "data_offset": 0, 00:16:27.978 "data_size": 65536 00:16:27.978 }, 00:16:27.978 { 00:16:27.978 "name": "BaseBdev4", 00:16:27.978 "uuid": "bbfc8f7e-998f-41bb-b4a3-955899690a4e", 00:16:27.978 "is_configured": true, 00:16:27.978 "data_offset": 0, 00:16:27.978 "data_size": 65536 00:16:27.978 } 00:16:27.978 ] 00:16:27.978 } 00:16:27.978 } 00:16:27.978 }' 00:16:27.978 16:12:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:27.978 16:12:54 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:27.978 BaseBdev2 00:16:27.978 BaseBdev3 00:16:27.978 BaseBdev4' 00:16:27.978 16:12:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:27.978 16:12:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:27.978 16:12:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:27.978 16:12:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:27.978 16:12:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:27.978 16:12:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.978 16:12:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.978 16:12:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.978 16:12:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:27.978 16:12:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:27.978 16:12:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:27.978 16:12:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:27.978 16:12:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:27.978 16:12:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.978 16:12:54 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:27.978 16:12:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.978 16:12:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:27.978 16:12:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:27.978 16:12:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:27.978 16:12:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:27.978 16:12:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:27.978 16:12:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.978 16:12:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.978 16:12:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.978 16:12:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:27.978 16:12:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:27.978 16:12:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:27.978 16:12:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:27.978 16:12:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:27.978 16:12:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.978 16:12:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.237 16:12:54 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.237 16:12:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:28.237 16:12:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:28.237 16:12:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:28.237 16:12:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.237 16:12:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.237 [2024-12-12 16:12:54.373543] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:28.237 [2024-12-12 16:12:54.373590] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:28.237 [2024-12-12 16:12:54.373679] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:28.237 [2024-12-12 16:12:54.374024] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:28.237 [2024-12-12 16:12:54.374041] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:16:28.237 16:12:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.237 16:12:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 84859 00:16:28.237 16:12:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 84859 ']' 00:16:28.237 16:12:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 84859 00:16:28.237 16:12:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:16:28.237 16:12:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:16:28.237 16:12:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84859 00:16:28.237 killing process with pid 84859 00:16:28.237 16:12:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:28.237 16:12:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:28.237 16:12:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84859' 00:16:28.237 16:12:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 84859 00:16:28.237 [2024-12-12 16:12:54.421737] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:28.237 16:12:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 84859 00:16:28.806 [2024-12-12 16:12:54.864376] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:29.779 ************************************ 00:16:29.779 END TEST raid5f_state_function_test 00:16:29.779 ************************************ 00:16:29.779 16:12:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:16:29.779 00:16:29.779 real 0m11.869s 00:16:29.779 user 0m18.445s 00:16:29.779 sys 0m2.293s 00:16:29.779 16:12:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:29.779 16:12:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.038 16:12:56 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:16:30.038 16:12:56 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:30.038 16:12:56 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:30.038 16:12:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:30.038 ************************************ 00:16:30.038 START TEST 
raid5f_state_function_test_sb 00:16:30.038 ************************************ 00:16:30.038 16:12:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 true 00:16:30.038 16:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:16:30.038 16:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:16:30.038 16:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:16:30.038 16:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:30.038 16:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:30.038 16:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:30.038 16:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:30.038 16:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:30.038 16:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:30.038 16:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:30.038 16:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:30.038 16:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:30.039 16:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:16:30.039 16:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:30.039 16:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:30.039 16:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:16:30.039 
16:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:30.039 16:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:30.039 16:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:30.039 16:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:30.039 16:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:30.039 16:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:30.039 16:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:30.039 16:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:30.039 16:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:16:30.039 16:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:16:30.039 16:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:16:30.039 16:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:16:30.039 16:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:16:30.039 16:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=85530 00:16:30.039 16:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:30.039 16:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 85530' 00:16:30.039 Process raid pid: 85530 00:16:30.039 16:12:56 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 85530 00:16:30.039 16:12:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 85530 ']' 00:16:30.039 16:12:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:30.039 16:12:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:30.039 16:12:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:30.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:30.039 16:12:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:30.039 16:12:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.039 [2024-12-12 16:12:56.297622] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:16:30.039 [2024-12-12 16:12:56.297800] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:30.299 [2024-12-12 16:12:56.450608] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:30.299 [2024-12-12 16:12:56.587320] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:30.558 [2024-12-12 16:12:56.829900] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:30.558 [2024-12-12 16:12:56.829952] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:30.818 16:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:30.818 16:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:16:30.818 16:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:30.818 16:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.818 16:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.818 [2024-12-12 16:12:57.134962] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:30.818 [2024-12-12 16:12:57.135137] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:30.818 [2024-12-12 16:12:57.135154] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:30.818 [2024-12-12 16:12:57.135167] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:30.818 [2024-12-12 16:12:57.135176] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:16:30.818 [2024-12-12 16:12:57.135188] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:30.819 [2024-12-12 16:12:57.135196] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:30.819 [2024-12-12 16:12:57.135208] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:30.819 16:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.819 16:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:30.819 16:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:30.819 16:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:30.819 16:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:30.819 16:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:30.819 16:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:30.819 16:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:30.819 16:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:30.819 16:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:30.819 16:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:30.819 16:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.819 16:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:16:30.819 16:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.819 16:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.819 16:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.078 16:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:31.078 "name": "Existed_Raid", 00:16:31.078 "uuid": "8428a140-e6f6-4e44-80c9-3115ed9d4c94", 00:16:31.078 "strip_size_kb": 64, 00:16:31.078 "state": "configuring", 00:16:31.078 "raid_level": "raid5f", 00:16:31.078 "superblock": true, 00:16:31.078 "num_base_bdevs": 4, 00:16:31.078 "num_base_bdevs_discovered": 0, 00:16:31.078 "num_base_bdevs_operational": 4, 00:16:31.078 "base_bdevs_list": [ 00:16:31.078 { 00:16:31.078 "name": "BaseBdev1", 00:16:31.078 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.078 "is_configured": false, 00:16:31.078 "data_offset": 0, 00:16:31.078 "data_size": 0 00:16:31.078 }, 00:16:31.078 { 00:16:31.078 "name": "BaseBdev2", 00:16:31.078 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.078 "is_configured": false, 00:16:31.078 "data_offset": 0, 00:16:31.078 "data_size": 0 00:16:31.078 }, 00:16:31.078 { 00:16:31.078 "name": "BaseBdev3", 00:16:31.078 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.078 "is_configured": false, 00:16:31.078 "data_offset": 0, 00:16:31.078 "data_size": 0 00:16:31.078 }, 00:16:31.078 { 00:16:31.078 "name": "BaseBdev4", 00:16:31.078 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.078 "is_configured": false, 00:16:31.078 "data_offset": 0, 00:16:31.078 "data_size": 0 00:16:31.078 } 00:16:31.078 ] 00:16:31.078 }' 00:16:31.078 16:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:31.078 16:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:16:31.338 16:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:31.338 16:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.338 16:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.339 [2024-12-12 16:12:57.578104] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:31.339 [2024-12-12 16:12:57.578245] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:31.339 16:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.339 16:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:31.339 16:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.339 16:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.339 [2024-12-12 16:12:57.590094] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:31.339 [2024-12-12 16:12:57.590203] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:31.339 [2024-12-12 16:12:57.590236] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:31.339 [2024-12-12 16:12:57.590266] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:31.339 [2024-12-12 16:12:57.590289] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:31.339 [2024-12-12 16:12:57.590317] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:31.339 [2024-12-12 16:12:57.590339] 
bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:31.339 [2024-12-12 16:12:57.590376] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:31.339 16:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.339 16:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:31.339 16:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.339 16:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.339 [2024-12-12 16:12:57.647021] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:31.339 BaseBdev1 00:16:31.339 16:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.339 16:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:31.339 16:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:31.339 16:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:31.339 16:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:31.339 16:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:31.339 16:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:31.339 16:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:31.339 16:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.339 16:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:16:31.339 16:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.339 16:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:31.339 16:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.339 16:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.339 [ 00:16:31.339 { 00:16:31.339 "name": "BaseBdev1", 00:16:31.339 "aliases": [ 00:16:31.339 "8adfd3a7-aaf0-4fb6-bcf0-b2032aab651f" 00:16:31.339 ], 00:16:31.339 "product_name": "Malloc disk", 00:16:31.339 "block_size": 512, 00:16:31.339 "num_blocks": 65536, 00:16:31.339 "uuid": "8adfd3a7-aaf0-4fb6-bcf0-b2032aab651f", 00:16:31.339 "assigned_rate_limits": { 00:16:31.339 "rw_ios_per_sec": 0, 00:16:31.339 "rw_mbytes_per_sec": 0, 00:16:31.339 "r_mbytes_per_sec": 0, 00:16:31.339 "w_mbytes_per_sec": 0 00:16:31.339 }, 00:16:31.339 "claimed": true, 00:16:31.339 "claim_type": "exclusive_write", 00:16:31.339 "zoned": false, 00:16:31.339 "supported_io_types": { 00:16:31.339 "read": true, 00:16:31.339 "write": true, 00:16:31.339 "unmap": true, 00:16:31.339 "flush": true, 00:16:31.339 "reset": true, 00:16:31.339 "nvme_admin": false, 00:16:31.339 "nvme_io": false, 00:16:31.339 "nvme_io_md": false, 00:16:31.339 "write_zeroes": true, 00:16:31.339 "zcopy": true, 00:16:31.339 "get_zone_info": false, 00:16:31.339 "zone_management": false, 00:16:31.339 "zone_append": false, 00:16:31.339 "compare": false, 00:16:31.339 "compare_and_write": false, 00:16:31.339 "abort": true, 00:16:31.339 "seek_hole": false, 00:16:31.339 "seek_data": false, 00:16:31.339 "copy": true, 00:16:31.339 "nvme_iov_md": false 00:16:31.339 }, 00:16:31.339 "memory_domains": [ 00:16:31.339 { 00:16:31.339 "dma_device_id": "system", 00:16:31.339 "dma_device_type": 1 00:16:31.339 }, 00:16:31.339 { 00:16:31.339 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:16:31.339 "dma_device_type": 2 00:16:31.339 } 00:16:31.339 ], 00:16:31.339 "driver_specific": {} 00:16:31.339 } 00:16:31.339 ] 00:16:31.339 16:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.339 16:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:31.339 16:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:31.339 16:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:31.339 16:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:31.339 16:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:31.339 16:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:31.339 16:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:31.339 16:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:31.339 16:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:31.339 16:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:31.339 16:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:31.599 16:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.599 16:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:31.599 16:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.599 16:12:57 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.599 16:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.599 16:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:31.599 "name": "Existed_Raid", 00:16:31.599 "uuid": "748e7e58-3af9-4e81-b8dd-dba5d4b9ab0f", 00:16:31.599 "strip_size_kb": 64, 00:16:31.599 "state": "configuring", 00:16:31.599 "raid_level": "raid5f", 00:16:31.599 "superblock": true, 00:16:31.599 "num_base_bdevs": 4, 00:16:31.599 "num_base_bdevs_discovered": 1, 00:16:31.599 "num_base_bdevs_operational": 4, 00:16:31.599 "base_bdevs_list": [ 00:16:31.599 { 00:16:31.599 "name": "BaseBdev1", 00:16:31.599 "uuid": "8adfd3a7-aaf0-4fb6-bcf0-b2032aab651f", 00:16:31.599 "is_configured": true, 00:16:31.599 "data_offset": 2048, 00:16:31.599 "data_size": 63488 00:16:31.599 }, 00:16:31.599 { 00:16:31.599 "name": "BaseBdev2", 00:16:31.599 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.599 "is_configured": false, 00:16:31.599 "data_offset": 0, 00:16:31.599 "data_size": 0 00:16:31.599 }, 00:16:31.599 { 00:16:31.599 "name": "BaseBdev3", 00:16:31.599 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.599 "is_configured": false, 00:16:31.599 "data_offset": 0, 00:16:31.599 "data_size": 0 00:16:31.599 }, 00:16:31.599 { 00:16:31.599 "name": "BaseBdev4", 00:16:31.599 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.599 "is_configured": false, 00:16:31.599 "data_offset": 0, 00:16:31.599 "data_size": 0 00:16:31.599 } 00:16:31.599 ] 00:16:31.599 }' 00:16:31.599 16:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:31.599 16:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.860 16:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:31.860 16:12:58 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.860 16:12:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.860 [2024-12-12 16:12:58.162465] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:31.860 [2024-12-12 16:12:58.162527] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:31.860 16:12:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.860 16:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:31.860 16:12:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.860 16:12:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.860 [2024-12-12 16:12:58.174515] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:31.860 [2024-12-12 16:12:58.176676] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:31.860 [2024-12-12 16:12:58.176731] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:31.860 [2024-12-12 16:12:58.176743] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:31.860 [2024-12-12 16:12:58.176757] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:31.860 [2024-12-12 16:12:58.176765] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:31.860 [2024-12-12 16:12:58.176777] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:31.860 16:12:58 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.860 16:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:31.860 16:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:31.860 16:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:31.860 16:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:31.860 16:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:31.860 16:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:31.860 16:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:31.860 16:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:31.860 16:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:31.860 16:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:31.860 16:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:31.860 16:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:31.860 16:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.860 16:12:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.860 16:12:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.860 16:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:31.860 16:12:58 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.120 16:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:32.120 "name": "Existed_Raid", 00:16:32.120 "uuid": "869440a1-1876-452b-b2c6-2c06ad3488b6", 00:16:32.120 "strip_size_kb": 64, 00:16:32.120 "state": "configuring", 00:16:32.120 "raid_level": "raid5f", 00:16:32.120 "superblock": true, 00:16:32.120 "num_base_bdevs": 4, 00:16:32.120 "num_base_bdevs_discovered": 1, 00:16:32.120 "num_base_bdevs_operational": 4, 00:16:32.121 "base_bdevs_list": [ 00:16:32.121 { 00:16:32.121 "name": "BaseBdev1", 00:16:32.121 "uuid": "8adfd3a7-aaf0-4fb6-bcf0-b2032aab651f", 00:16:32.121 "is_configured": true, 00:16:32.121 "data_offset": 2048, 00:16:32.121 "data_size": 63488 00:16:32.121 }, 00:16:32.121 { 00:16:32.121 "name": "BaseBdev2", 00:16:32.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.121 "is_configured": false, 00:16:32.121 "data_offset": 0, 00:16:32.121 "data_size": 0 00:16:32.121 }, 00:16:32.121 { 00:16:32.121 "name": "BaseBdev3", 00:16:32.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.121 "is_configured": false, 00:16:32.121 "data_offset": 0, 00:16:32.121 "data_size": 0 00:16:32.121 }, 00:16:32.121 { 00:16:32.121 "name": "BaseBdev4", 00:16:32.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.121 "is_configured": false, 00:16:32.121 "data_offset": 0, 00:16:32.121 "data_size": 0 00:16:32.121 } 00:16:32.121 ] 00:16:32.121 }' 00:16:32.121 16:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:32.121 16:12:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.380 16:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:32.381 16:12:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:32.381 16:12:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.381 [2024-12-12 16:12:58.643323] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:32.381 BaseBdev2 00:16:32.381 16:12:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.381 16:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:32.381 16:12:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:32.381 16:12:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:32.381 16:12:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:32.381 16:12:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:32.381 16:12:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:32.381 16:12:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:32.381 16:12:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.381 16:12:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.381 16:12:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.381 16:12:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:32.381 16:12:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.381 16:12:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.381 [ 00:16:32.381 { 00:16:32.381 "name": "BaseBdev2", 00:16:32.381 "aliases": [ 00:16:32.381 
"a3c84463-41d3-45fa-8896-81a49950bafc" 00:16:32.381 ], 00:16:32.381 "product_name": "Malloc disk", 00:16:32.381 "block_size": 512, 00:16:32.381 "num_blocks": 65536, 00:16:32.381 "uuid": "a3c84463-41d3-45fa-8896-81a49950bafc", 00:16:32.381 "assigned_rate_limits": { 00:16:32.381 "rw_ios_per_sec": 0, 00:16:32.381 "rw_mbytes_per_sec": 0, 00:16:32.381 "r_mbytes_per_sec": 0, 00:16:32.381 "w_mbytes_per_sec": 0 00:16:32.381 }, 00:16:32.381 "claimed": true, 00:16:32.381 "claim_type": "exclusive_write", 00:16:32.381 "zoned": false, 00:16:32.381 "supported_io_types": { 00:16:32.381 "read": true, 00:16:32.381 "write": true, 00:16:32.381 "unmap": true, 00:16:32.381 "flush": true, 00:16:32.381 "reset": true, 00:16:32.381 "nvme_admin": false, 00:16:32.381 "nvme_io": false, 00:16:32.381 "nvme_io_md": false, 00:16:32.381 "write_zeroes": true, 00:16:32.381 "zcopy": true, 00:16:32.381 "get_zone_info": false, 00:16:32.381 "zone_management": false, 00:16:32.381 "zone_append": false, 00:16:32.381 "compare": false, 00:16:32.381 "compare_and_write": false, 00:16:32.381 "abort": true, 00:16:32.381 "seek_hole": false, 00:16:32.381 "seek_data": false, 00:16:32.381 "copy": true, 00:16:32.381 "nvme_iov_md": false 00:16:32.381 }, 00:16:32.381 "memory_domains": [ 00:16:32.381 { 00:16:32.381 "dma_device_id": "system", 00:16:32.381 "dma_device_type": 1 00:16:32.381 }, 00:16:32.381 { 00:16:32.381 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:32.381 "dma_device_type": 2 00:16:32.381 } 00:16:32.381 ], 00:16:32.381 "driver_specific": {} 00:16:32.381 } 00:16:32.381 ] 00:16:32.381 16:12:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.381 16:12:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:32.381 16:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:32.381 16:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:16:32.381 16:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:32.381 16:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:32.381 16:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:32.381 16:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:32.381 16:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:32.381 16:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:32.381 16:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:32.381 16:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:32.381 16:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:32.381 16:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:32.381 16:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.381 16:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:32.381 16:12:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.381 16:12:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.381 16:12:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.641 16:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:32.641 "name": "Existed_Raid", 00:16:32.641 "uuid": 
"869440a1-1876-452b-b2c6-2c06ad3488b6", 00:16:32.641 "strip_size_kb": 64, 00:16:32.641 "state": "configuring", 00:16:32.641 "raid_level": "raid5f", 00:16:32.641 "superblock": true, 00:16:32.641 "num_base_bdevs": 4, 00:16:32.641 "num_base_bdevs_discovered": 2, 00:16:32.641 "num_base_bdevs_operational": 4, 00:16:32.641 "base_bdevs_list": [ 00:16:32.641 { 00:16:32.641 "name": "BaseBdev1", 00:16:32.641 "uuid": "8adfd3a7-aaf0-4fb6-bcf0-b2032aab651f", 00:16:32.641 "is_configured": true, 00:16:32.641 "data_offset": 2048, 00:16:32.641 "data_size": 63488 00:16:32.641 }, 00:16:32.641 { 00:16:32.641 "name": "BaseBdev2", 00:16:32.641 "uuid": "a3c84463-41d3-45fa-8896-81a49950bafc", 00:16:32.641 "is_configured": true, 00:16:32.641 "data_offset": 2048, 00:16:32.641 "data_size": 63488 00:16:32.641 }, 00:16:32.641 { 00:16:32.641 "name": "BaseBdev3", 00:16:32.641 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.641 "is_configured": false, 00:16:32.641 "data_offset": 0, 00:16:32.641 "data_size": 0 00:16:32.641 }, 00:16:32.641 { 00:16:32.641 "name": "BaseBdev4", 00:16:32.641 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.641 "is_configured": false, 00:16:32.641 "data_offset": 0, 00:16:32.641 "data_size": 0 00:16:32.641 } 00:16:32.641 ] 00:16:32.641 }' 00:16:32.641 16:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:32.641 16:12:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.901 16:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:32.901 16:12:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.901 16:12:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.901 [2024-12-12 16:12:59.208624] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:32.901 BaseBdev3 
00:16:32.901 16:12:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.901 16:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:16:32.901 16:12:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:32.901 16:12:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:32.901 16:12:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:32.901 16:12:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:32.901 16:12:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:32.901 16:12:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:32.901 16:12:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.901 16:12:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.901 16:12:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.901 16:12:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:32.901 16:12:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.901 16:12:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.901 [ 00:16:32.901 { 00:16:32.901 "name": "BaseBdev3", 00:16:32.901 "aliases": [ 00:16:32.901 "7ba396ea-dd1b-4dc7-bc8e-d516579338b3" 00:16:32.901 ], 00:16:32.901 "product_name": "Malloc disk", 00:16:32.901 "block_size": 512, 00:16:32.901 "num_blocks": 65536, 00:16:32.901 "uuid": "7ba396ea-dd1b-4dc7-bc8e-d516579338b3", 00:16:32.901 
"assigned_rate_limits": { 00:16:32.901 "rw_ios_per_sec": 0, 00:16:32.901 "rw_mbytes_per_sec": 0, 00:16:32.901 "r_mbytes_per_sec": 0, 00:16:32.901 "w_mbytes_per_sec": 0 00:16:32.901 }, 00:16:32.901 "claimed": true, 00:16:32.901 "claim_type": "exclusive_write", 00:16:32.901 "zoned": false, 00:16:32.901 "supported_io_types": { 00:16:32.901 "read": true, 00:16:32.901 "write": true, 00:16:32.901 "unmap": true, 00:16:32.901 "flush": true, 00:16:32.901 "reset": true, 00:16:32.901 "nvme_admin": false, 00:16:32.901 "nvme_io": false, 00:16:32.901 "nvme_io_md": false, 00:16:32.901 "write_zeroes": true, 00:16:32.901 "zcopy": true, 00:16:32.901 "get_zone_info": false, 00:16:32.901 "zone_management": false, 00:16:32.901 "zone_append": false, 00:16:32.901 "compare": false, 00:16:32.901 "compare_and_write": false, 00:16:32.901 "abort": true, 00:16:32.901 "seek_hole": false, 00:16:32.901 "seek_data": false, 00:16:32.901 "copy": true, 00:16:32.901 "nvme_iov_md": false 00:16:32.901 }, 00:16:32.901 "memory_domains": [ 00:16:32.901 { 00:16:32.901 "dma_device_id": "system", 00:16:32.901 "dma_device_type": 1 00:16:32.901 }, 00:16:32.901 { 00:16:32.901 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:32.901 "dma_device_type": 2 00:16:32.901 } 00:16:32.901 ], 00:16:32.901 "driver_specific": {} 00:16:32.901 } 00:16:32.901 ] 00:16:32.901 16:12:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.901 16:12:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:32.901 16:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:32.901 16:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:32.901 16:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:32.901 16:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:16:32.901 16:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:32.901 16:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:32.901 16:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:32.901 16:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:32.901 16:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:33.162 16:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:33.162 16:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:33.162 16:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:33.162 16:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.162 16:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:33.162 16:12:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.162 16:12:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.162 16:12:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.162 16:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:33.162 "name": "Existed_Raid", 00:16:33.162 "uuid": "869440a1-1876-452b-b2c6-2c06ad3488b6", 00:16:33.162 "strip_size_kb": 64, 00:16:33.162 "state": "configuring", 00:16:33.162 "raid_level": "raid5f", 00:16:33.162 "superblock": true, 00:16:33.162 "num_base_bdevs": 4, 00:16:33.162 "num_base_bdevs_discovered": 3, 
00:16:33.162 "num_base_bdevs_operational": 4, 00:16:33.162 "base_bdevs_list": [ 00:16:33.162 { 00:16:33.162 "name": "BaseBdev1", 00:16:33.162 "uuid": "8adfd3a7-aaf0-4fb6-bcf0-b2032aab651f", 00:16:33.162 "is_configured": true, 00:16:33.162 "data_offset": 2048, 00:16:33.162 "data_size": 63488 00:16:33.162 }, 00:16:33.162 { 00:16:33.162 "name": "BaseBdev2", 00:16:33.162 "uuid": "a3c84463-41d3-45fa-8896-81a49950bafc", 00:16:33.162 "is_configured": true, 00:16:33.162 "data_offset": 2048, 00:16:33.162 "data_size": 63488 00:16:33.162 }, 00:16:33.162 { 00:16:33.162 "name": "BaseBdev3", 00:16:33.162 "uuid": "7ba396ea-dd1b-4dc7-bc8e-d516579338b3", 00:16:33.162 "is_configured": true, 00:16:33.162 "data_offset": 2048, 00:16:33.162 "data_size": 63488 00:16:33.162 }, 00:16:33.162 { 00:16:33.162 "name": "BaseBdev4", 00:16:33.162 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.162 "is_configured": false, 00:16:33.162 "data_offset": 0, 00:16:33.162 "data_size": 0 00:16:33.162 } 00:16:33.162 ] 00:16:33.162 }' 00:16:33.162 16:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:33.162 16:12:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.422 16:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:33.422 16:12:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.422 16:12:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.422 [2024-12-12 16:12:59.747271] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:33.422 [2024-12-12 16:12:59.747740] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:33.422 [2024-12-12 16:12:59.747808] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:33.422 [2024-12-12 
16:12:59.748153] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:33.422 BaseBdev4 00:16:33.422 16:12:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.422 16:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:16:33.422 16:12:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:16:33.422 16:12:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:33.422 16:12:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:33.422 16:12:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:33.422 16:12:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:33.422 16:12:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:33.422 16:12:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.422 16:12:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.422 [2024-12-12 16:12:59.755712] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:33.422 [2024-12-12 16:12:59.755742] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:16:33.422 [2024-12-12 16:12:59.756066] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:33.422 16:12:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.422 16:12:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:33.422 16:12:59 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.422 16:12:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.683 [ 00:16:33.683 { 00:16:33.683 "name": "BaseBdev4", 00:16:33.683 "aliases": [ 00:16:33.683 "71d0ace6-28d5-465c-bf5f-9c9f36f292d4" 00:16:33.683 ], 00:16:33.683 "product_name": "Malloc disk", 00:16:33.683 "block_size": 512, 00:16:33.683 "num_blocks": 65536, 00:16:33.683 "uuid": "71d0ace6-28d5-465c-bf5f-9c9f36f292d4", 00:16:33.683 "assigned_rate_limits": { 00:16:33.683 "rw_ios_per_sec": 0, 00:16:33.683 "rw_mbytes_per_sec": 0, 00:16:33.683 "r_mbytes_per_sec": 0, 00:16:33.683 "w_mbytes_per_sec": 0 00:16:33.683 }, 00:16:33.683 "claimed": true, 00:16:33.683 "claim_type": "exclusive_write", 00:16:33.683 "zoned": false, 00:16:33.683 "supported_io_types": { 00:16:33.683 "read": true, 00:16:33.683 "write": true, 00:16:33.683 "unmap": true, 00:16:33.683 "flush": true, 00:16:33.683 "reset": true, 00:16:33.683 "nvme_admin": false, 00:16:33.683 "nvme_io": false, 00:16:33.683 "nvme_io_md": false, 00:16:33.683 "write_zeroes": true, 00:16:33.683 "zcopy": true, 00:16:33.683 "get_zone_info": false, 00:16:33.683 "zone_management": false, 00:16:33.683 "zone_append": false, 00:16:33.683 "compare": false, 00:16:33.683 "compare_and_write": false, 00:16:33.683 "abort": true, 00:16:33.683 "seek_hole": false, 00:16:33.683 "seek_data": false, 00:16:33.683 "copy": true, 00:16:33.683 "nvme_iov_md": false 00:16:33.683 }, 00:16:33.683 "memory_domains": [ 00:16:33.683 { 00:16:33.683 "dma_device_id": "system", 00:16:33.683 "dma_device_type": 1 00:16:33.683 }, 00:16:33.683 { 00:16:33.683 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:33.683 "dma_device_type": 2 00:16:33.683 } 00:16:33.683 ], 00:16:33.683 "driver_specific": {} 00:16:33.683 } 00:16:33.683 ] 00:16:33.683 16:12:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.683 16:12:59 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:33.683 16:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:33.683 16:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:33.683 16:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:33.683 16:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:33.683 16:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:33.683 16:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:33.683 16:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:33.683 16:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:33.683 16:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:33.683 16:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:33.683 16:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:33.683 16:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:33.683 16:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.683 16:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:33.683 16:12:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.683 16:12:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:16:33.683 16:12:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.683 16:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:33.683 "name": "Existed_Raid", 00:16:33.683 "uuid": "869440a1-1876-452b-b2c6-2c06ad3488b6", 00:16:33.683 "strip_size_kb": 64, 00:16:33.683 "state": "online", 00:16:33.683 "raid_level": "raid5f", 00:16:33.683 "superblock": true, 00:16:33.683 "num_base_bdevs": 4, 00:16:33.683 "num_base_bdevs_discovered": 4, 00:16:33.683 "num_base_bdevs_operational": 4, 00:16:33.683 "base_bdevs_list": [ 00:16:33.683 { 00:16:33.683 "name": "BaseBdev1", 00:16:33.683 "uuid": "8adfd3a7-aaf0-4fb6-bcf0-b2032aab651f", 00:16:33.683 "is_configured": true, 00:16:33.683 "data_offset": 2048, 00:16:33.683 "data_size": 63488 00:16:33.683 }, 00:16:33.683 { 00:16:33.683 "name": "BaseBdev2", 00:16:33.683 "uuid": "a3c84463-41d3-45fa-8896-81a49950bafc", 00:16:33.683 "is_configured": true, 00:16:33.683 "data_offset": 2048, 00:16:33.683 "data_size": 63488 00:16:33.683 }, 00:16:33.683 { 00:16:33.683 "name": "BaseBdev3", 00:16:33.683 "uuid": "7ba396ea-dd1b-4dc7-bc8e-d516579338b3", 00:16:33.683 "is_configured": true, 00:16:33.683 "data_offset": 2048, 00:16:33.683 "data_size": 63488 00:16:33.683 }, 00:16:33.683 { 00:16:33.683 "name": "BaseBdev4", 00:16:33.683 "uuid": "71d0ace6-28d5-465c-bf5f-9c9f36f292d4", 00:16:33.683 "is_configured": true, 00:16:33.683 "data_offset": 2048, 00:16:33.683 "data_size": 63488 00:16:33.683 } 00:16:33.683 ] 00:16:33.683 }' 00:16:33.683 16:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:33.683 16:12:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.943 16:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:33.943 16:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:16:33.943 16:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:33.943 16:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:33.943 16:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:33.943 16:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:33.943 16:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:33.943 16:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:33.943 16:13:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.943 16:13:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.943 [2024-12-12 16:13:00.257195] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:33.943 16:13:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.943 16:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:33.943 "name": "Existed_Raid", 00:16:33.943 "aliases": [ 00:16:33.943 "869440a1-1876-452b-b2c6-2c06ad3488b6" 00:16:33.943 ], 00:16:33.943 "product_name": "Raid Volume", 00:16:33.943 "block_size": 512, 00:16:33.943 "num_blocks": 190464, 00:16:33.943 "uuid": "869440a1-1876-452b-b2c6-2c06ad3488b6", 00:16:33.943 "assigned_rate_limits": { 00:16:33.943 "rw_ios_per_sec": 0, 00:16:33.943 "rw_mbytes_per_sec": 0, 00:16:33.943 "r_mbytes_per_sec": 0, 00:16:33.943 "w_mbytes_per_sec": 0 00:16:33.943 }, 00:16:33.943 "claimed": false, 00:16:33.943 "zoned": false, 00:16:33.943 "supported_io_types": { 00:16:33.943 "read": true, 00:16:33.943 "write": true, 00:16:33.943 "unmap": false, 00:16:33.943 "flush": false, 
00:16:33.943 "reset": true, 00:16:33.943 "nvme_admin": false, 00:16:33.943 "nvme_io": false, 00:16:33.943 "nvme_io_md": false, 00:16:33.943 "write_zeroes": true, 00:16:33.943 "zcopy": false, 00:16:33.943 "get_zone_info": false, 00:16:33.943 "zone_management": false, 00:16:33.943 "zone_append": false, 00:16:33.943 "compare": false, 00:16:33.943 "compare_and_write": false, 00:16:33.943 "abort": false, 00:16:33.943 "seek_hole": false, 00:16:33.943 "seek_data": false, 00:16:33.943 "copy": false, 00:16:33.943 "nvme_iov_md": false 00:16:33.943 }, 00:16:33.943 "driver_specific": { 00:16:33.943 "raid": { 00:16:33.943 "uuid": "869440a1-1876-452b-b2c6-2c06ad3488b6", 00:16:33.943 "strip_size_kb": 64, 00:16:33.943 "state": "online", 00:16:33.943 "raid_level": "raid5f", 00:16:33.943 "superblock": true, 00:16:33.943 "num_base_bdevs": 4, 00:16:33.943 "num_base_bdevs_discovered": 4, 00:16:33.943 "num_base_bdevs_operational": 4, 00:16:33.943 "base_bdevs_list": [ 00:16:33.943 { 00:16:33.943 "name": "BaseBdev1", 00:16:33.943 "uuid": "8adfd3a7-aaf0-4fb6-bcf0-b2032aab651f", 00:16:33.943 "is_configured": true, 00:16:33.943 "data_offset": 2048, 00:16:33.943 "data_size": 63488 00:16:33.943 }, 00:16:33.943 { 00:16:33.943 "name": "BaseBdev2", 00:16:33.943 "uuid": "a3c84463-41d3-45fa-8896-81a49950bafc", 00:16:33.943 "is_configured": true, 00:16:33.943 "data_offset": 2048, 00:16:33.943 "data_size": 63488 00:16:33.943 }, 00:16:33.943 { 00:16:33.943 "name": "BaseBdev3", 00:16:33.943 "uuid": "7ba396ea-dd1b-4dc7-bc8e-d516579338b3", 00:16:33.943 "is_configured": true, 00:16:33.943 "data_offset": 2048, 00:16:33.943 "data_size": 63488 00:16:33.943 }, 00:16:33.943 { 00:16:33.943 "name": "BaseBdev4", 00:16:33.943 "uuid": "71d0ace6-28d5-465c-bf5f-9c9f36f292d4", 00:16:33.943 "is_configured": true, 00:16:33.943 "data_offset": 2048, 00:16:33.943 "data_size": 63488 00:16:33.943 } 00:16:33.943 ] 00:16:33.943 } 00:16:33.943 } 00:16:33.943 }' 00:16:33.943 16:13:00 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:34.203 16:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:34.203 BaseBdev2 00:16:34.203 BaseBdev3 00:16:34.203 BaseBdev4' 00:16:34.203 16:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:34.203 16:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:34.203 16:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:34.203 16:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:34.203 16:13:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.203 16:13:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.203 16:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:34.203 16:13:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.203 16:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:34.203 16:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:34.203 16:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:34.203 16:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:34.203 16:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:34.203 16:13:00 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.203 16:13:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.203 16:13:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.203 16:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:34.203 16:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:34.203 16:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:34.203 16:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:34.203 16:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:34.203 16:13:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.203 16:13:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.203 16:13:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.203 16:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:34.203 16:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:34.203 16:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:34.203 16:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:34.203 16:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:34.203 16:13:00 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.203 16:13:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.203 16:13:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.463 16:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:34.463 16:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:34.463 16:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:34.463 16:13:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.463 16:13:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.463 [2024-12-12 16:13:00.588416] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:34.463 16:13:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.463 16:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:34.463 16:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:16:34.463 16:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:34.463 16:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:16:34.463 16:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:34.463 16:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:16:34.463 16:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:34.463 16:13:00 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:34.463 16:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:34.463 16:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:34.463 16:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:34.463 16:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:34.463 16:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:34.463 16:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:34.463 16:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:34.463 16:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.463 16:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:34.463 16:13:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.463 16:13:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.463 16:13:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.463 16:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:34.463 "name": "Existed_Raid", 00:16:34.463 "uuid": "869440a1-1876-452b-b2c6-2c06ad3488b6", 00:16:34.463 "strip_size_kb": 64, 00:16:34.463 "state": "online", 00:16:34.463 "raid_level": "raid5f", 00:16:34.463 "superblock": true, 00:16:34.463 "num_base_bdevs": 4, 00:16:34.463 "num_base_bdevs_discovered": 3, 00:16:34.463 "num_base_bdevs_operational": 3, 00:16:34.463 "base_bdevs_list": [ 00:16:34.463 { 00:16:34.463 "name": 
null, 00:16:34.463 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:34.463 "is_configured": false, 00:16:34.463 "data_offset": 0, 00:16:34.463 "data_size": 63488 00:16:34.463 }, 00:16:34.463 { 00:16:34.463 "name": "BaseBdev2", 00:16:34.463 "uuid": "a3c84463-41d3-45fa-8896-81a49950bafc", 00:16:34.463 "is_configured": true, 00:16:34.463 "data_offset": 2048, 00:16:34.463 "data_size": 63488 00:16:34.463 }, 00:16:34.463 { 00:16:34.463 "name": "BaseBdev3", 00:16:34.463 "uuid": "7ba396ea-dd1b-4dc7-bc8e-d516579338b3", 00:16:34.463 "is_configured": true, 00:16:34.463 "data_offset": 2048, 00:16:34.463 "data_size": 63488 00:16:34.463 }, 00:16:34.463 { 00:16:34.463 "name": "BaseBdev4", 00:16:34.463 "uuid": "71d0ace6-28d5-465c-bf5f-9c9f36f292d4", 00:16:34.463 "is_configured": true, 00:16:34.463 "data_offset": 2048, 00:16:34.463 "data_size": 63488 00:16:34.463 } 00:16:34.463 ] 00:16:34.463 }' 00:16:34.463 16:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:34.463 16:13:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.032 16:13:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:35.032 16:13:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:35.032 16:13:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.032 16:13:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:35.032 16:13:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.032 16:13:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.032 16:13:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.032 16:13:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # 
raid_bdev=Existed_Raid 00:16:35.032 16:13:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:35.032 16:13:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:35.032 16:13:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.032 16:13:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.032 [2024-12-12 16:13:01.161093] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:35.033 [2024-12-12 16:13:01.161248] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:35.033 [2024-12-12 16:13:01.254651] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:35.033 16:13:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.033 16:13:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:35.033 16:13:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:35.033 16:13:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.033 16:13:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:35.033 16:13:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.033 16:13:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.033 16:13:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.033 16:13:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:35.033 16:13:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:16:35.033 16:13:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:16:35.033 16:13:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.033 16:13:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.033 [2024-12-12 16:13:01.314577] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:35.292 16:13:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.292 16:13:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:35.292 16:13:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:35.293 16:13:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.293 16:13:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:35.293 16:13:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.293 16:13:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.293 16:13:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.293 16:13:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:35.293 16:13:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:35.293 16:13:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:16:35.293 16:13:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.293 16:13:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.293 [2024-12-12 
16:13:01.465772] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:16:35.293 [2024-12-12 16:13:01.465824] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:35.293 16:13:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.293 16:13:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:35.293 16:13:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:35.293 16:13:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.293 16:13:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.293 16:13:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:35.293 16:13:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.293 16:13:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.293 16:13:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:35.293 16:13:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:35.293 16:13:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:16:35.293 16:13:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:35.293 16:13:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:35.293 16:13:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:35.293 16:13:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.293 16:13:01 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.553 BaseBdev2 00:16:35.553 16:13:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.553 16:13:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:16:35.553 16:13:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:35.553 16:13:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:35.553 16:13:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:35.553 16:13:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:35.553 16:13:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:35.553 16:13:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:35.553 16:13:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.553 16:13:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.553 16:13:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.553 16:13:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:35.553 16:13:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.553 16:13:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.553 [ 00:16:35.553 { 00:16:35.553 "name": "BaseBdev2", 00:16:35.553 "aliases": [ 00:16:35.553 "e1ff4a75-694a-4c9f-99b8-83c1d869ecbe" 00:16:35.553 ], 00:16:35.553 "product_name": "Malloc disk", 00:16:35.553 "block_size": 512, 00:16:35.553 
"num_blocks": 65536, 00:16:35.553 "uuid": "e1ff4a75-694a-4c9f-99b8-83c1d869ecbe", 00:16:35.553 "assigned_rate_limits": { 00:16:35.553 "rw_ios_per_sec": 0, 00:16:35.553 "rw_mbytes_per_sec": 0, 00:16:35.553 "r_mbytes_per_sec": 0, 00:16:35.553 "w_mbytes_per_sec": 0 00:16:35.553 }, 00:16:35.553 "claimed": false, 00:16:35.553 "zoned": false, 00:16:35.553 "supported_io_types": { 00:16:35.553 "read": true, 00:16:35.553 "write": true, 00:16:35.553 "unmap": true, 00:16:35.553 "flush": true, 00:16:35.553 "reset": true, 00:16:35.553 "nvme_admin": false, 00:16:35.553 "nvme_io": false, 00:16:35.553 "nvme_io_md": false, 00:16:35.553 "write_zeroes": true, 00:16:35.553 "zcopy": true, 00:16:35.553 "get_zone_info": false, 00:16:35.553 "zone_management": false, 00:16:35.553 "zone_append": false, 00:16:35.553 "compare": false, 00:16:35.553 "compare_and_write": false, 00:16:35.553 "abort": true, 00:16:35.553 "seek_hole": false, 00:16:35.553 "seek_data": false, 00:16:35.553 "copy": true, 00:16:35.553 "nvme_iov_md": false 00:16:35.553 }, 00:16:35.553 "memory_domains": [ 00:16:35.553 { 00:16:35.553 "dma_device_id": "system", 00:16:35.553 "dma_device_type": 1 00:16:35.553 }, 00:16:35.553 { 00:16:35.553 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:35.553 "dma_device_type": 2 00:16:35.553 } 00:16:35.553 ], 00:16:35.553 "driver_specific": {} 00:16:35.553 } 00:16:35.553 ] 00:16:35.553 16:13:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.553 16:13:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:35.553 16:13:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:35.553 16:13:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:35.553 16:13:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:35.553 16:13:01 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.553 16:13:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.553 BaseBdev3 00:16:35.554 16:13:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.554 16:13:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:16:35.554 16:13:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:35.554 16:13:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:35.554 16:13:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:35.554 16:13:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:35.554 16:13:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:35.554 16:13:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:35.554 16:13:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.554 16:13:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.554 16:13:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.554 16:13:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:35.554 16:13:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.554 16:13:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.554 [ 00:16:35.554 { 00:16:35.554 "name": "BaseBdev3", 00:16:35.554 "aliases": [ 00:16:35.554 
"7cec8e8f-a020-4531-a1e4-1c4037d64586" 00:16:35.554 ], 00:16:35.554 "product_name": "Malloc disk", 00:16:35.554 "block_size": 512, 00:16:35.554 "num_blocks": 65536, 00:16:35.554 "uuid": "7cec8e8f-a020-4531-a1e4-1c4037d64586", 00:16:35.554 "assigned_rate_limits": { 00:16:35.554 "rw_ios_per_sec": 0, 00:16:35.554 "rw_mbytes_per_sec": 0, 00:16:35.554 "r_mbytes_per_sec": 0, 00:16:35.554 "w_mbytes_per_sec": 0 00:16:35.554 }, 00:16:35.554 "claimed": false, 00:16:35.554 "zoned": false, 00:16:35.554 "supported_io_types": { 00:16:35.554 "read": true, 00:16:35.554 "write": true, 00:16:35.554 "unmap": true, 00:16:35.554 "flush": true, 00:16:35.554 "reset": true, 00:16:35.554 "nvme_admin": false, 00:16:35.554 "nvme_io": false, 00:16:35.554 "nvme_io_md": false, 00:16:35.554 "write_zeroes": true, 00:16:35.554 "zcopy": true, 00:16:35.554 "get_zone_info": false, 00:16:35.554 "zone_management": false, 00:16:35.554 "zone_append": false, 00:16:35.554 "compare": false, 00:16:35.554 "compare_and_write": false, 00:16:35.554 "abort": true, 00:16:35.554 "seek_hole": false, 00:16:35.554 "seek_data": false, 00:16:35.554 "copy": true, 00:16:35.554 "nvme_iov_md": false 00:16:35.554 }, 00:16:35.554 "memory_domains": [ 00:16:35.554 { 00:16:35.554 "dma_device_id": "system", 00:16:35.554 "dma_device_type": 1 00:16:35.554 }, 00:16:35.554 { 00:16:35.554 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:35.554 "dma_device_type": 2 00:16:35.554 } 00:16:35.554 ], 00:16:35.554 "driver_specific": {} 00:16:35.554 } 00:16:35.554 ] 00:16:35.554 16:13:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.554 16:13:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:35.554 16:13:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:35.554 16:13:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:35.554 16:13:01 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:35.554 16:13:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.554 16:13:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.554 BaseBdev4 00:16:35.554 16:13:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.554 16:13:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:16:35.554 16:13:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:16:35.554 16:13:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:35.554 16:13:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:35.554 16:13:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:35.554 16:13:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:35.554 16:13:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:35.554 16:13:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.554 16:13:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.554 16:13:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.554 16:13:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:35.554 16:13:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.554 16:13:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:16:35.554 [ 00:16:35.554 { 00:16:35.554 "name": "BaseBdev4", 00:16:35.554 "aliases": [ 00:16:35.554 "27bd9eff-fbcc-44ae-ba2a-1eac04c9f6ee" 00:16:35.554 ], 00:16:35.554 "product_name": "Malloc disk", 00:16:35.554 "block_size": 512, 00:16:35.554 "num_blocks": 65536, 00:16:35.554 "uuid": "27bd9eff-fbcc-44ae-ba2a-1eac04c9f6ee", 00:16:35.554 "assigned_rate_limits": { 00:16:35.554 "rw_ios_per_sec": 0, 00:16:35.554 "rw_mbytes_per_sec": 0, 00:16:35.554 "r_mbytes_per_sec": 0, 00:16:35.554 "w_mbytes_per_sec": 0 00:16:35.554 }, 00:16:35.554 "claimed": false, 00:16:35.554 "zoned": false, 00:16:35.554 "supported_io_types": { 00:16:35.554 "read": true, 00:16:35.554 "write": true, 00:16:35.554 "unmap": true, 00:16:35.554 "flush": true, 00:16:35.554 "reset": true, 00:16:35.554 "nvme_admin": false, 00:16:35.554 "nvme_io": false, 00:16:35.554 "nvme_io_md": false, 00:16:35.554 "write_zeroes": true, 00:16:35.554 "zcopy": true, 00:16:35.554 "get_zone_info": false, 00:16:35.554 "zone_management": false, 00:16:35.554 "zone_append": false, 00:16:35.554 "compare": false, 00:16:35.554 "compare_and_write": false, 00:16:35.554 "abort": true, 00:16:35.554 "seek_hole": false, 00:16:35.554 "seek_data": false, 00:16:35.554 "copy": true, 00:16:35.554 "nvme_iov_md": false 00:16:35.554 }, 00:16:35.554 "memory_domains": [ 00:16:35.554 { 00:16:35.554 "dma_device_id": "system", 00:16:35.554 "dma_device_type": 1 00:16:35.554 }, 00:16:35.554 { 00:16:35.554 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:35.554 "dma_device_type": 2 00:16:35.554 } 00:16:35.554 ], 00:16:35.554 "driver_specific": {} 00:16:35.554 } 00:16:35.554 ] 00:16:35.554 16:13:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.554 16:13:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:35.554 16:13:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:35.554 16:13:01 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:35.554 16:13:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:35.554 16:13:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.554 16:13:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.554 [2024-12-12 16:13:01.867751] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:35.554 [2024-12-12 16:13:01.867856] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:35.554 [2024-12-12 16:13:01.867913] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:35.554 [2024-12-12 16:13:01.869705] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:35.554 [2024-12-12 16:13:01.869798] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:35.554 16:13:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.554 16:13:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:35.554 16:13:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:35.554 16:13:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:35.554 16:13:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:35.554 16:13:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:35.554 16:13:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:16:35.554 16:13:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:35.554 16:13:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:35.554 16:13:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:35.554 16:13:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:35.554 16:13:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.554 16:13:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:35.554 16:13:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.554 16:13:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.554 16:13:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.814 16:13:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:35.814 "name": "Existed_Raid", 00:16:35.814 "uuid": "50086253-a124-40e2-8cf8-0c21cad673e4", 00:16:35.814 "strip_size_kb": 64, 00:16:35.814 "state": "configuring", 00:16:35.814 "raid_level": "raid5f", 00:16:35.814 "superblock": true, 00:16:35.814 "num_base_bdevs": 4, 00:16:35.814 "num_base_bdevs_discovered": 3, 00:16:35.814 "num_base_bdevs_operational": 4, 00:16:35.814 "base_bdevs_list": [ 00:16:35.814 { 00:16:35.814 "name": "BaseBdev1", 00:16:35.814 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:35.814 "is_configured": false, 00:16:35.814 "data_offset": 0, 00:16:35.814 "data_size": 0 00:16:35.814 }, 00:16:35.814 { 00:16:35.814 "name": "BaseBdev2", 00:16:35.814 "uuid": "e1ff4a75-694a-4c9f-99b8-83c1d869ecbe", 00:16:35.814 "is_configured": true, 00:16:35.814 "data_offset": 2048, 00:16:35.814 
"data_size": 63488 00:16:35.814 }, 00:16:35.814 { 00:16:35.814 "name": "BaseBdev3", 00:16:35.814 "uuid": "7cec8e8f-a020-4531-a1e4-1c4037d64586", 00:16:35.814 "is_configured": true, 00:16:35.814 "data_offset": 2048, 00:16:35.814 "data_size": 63488 00:16:35.814 }, 00:16:35.814 { 00:16:35.814 "name": "BaseBdev4", 00:16:35.814 "uuid": "27bd9eff-fbcc-44ae-ba2a-1eac04c9f6ee", 00:16:35.814 "is_configured": true, 00:16:35.814 "data_offset": 2048, 00:16:35.814 "data_size": 63488 00:16:35.814 } 00:16:35.814 ] 00:16:35.814 }' 00:16:35.814 16:13:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:35.814 16:13:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.073 16:13:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:36.073 16:13:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.073 16:13:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.073 [2024-12-12 16:13:02.327763] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:36.073 16:13:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.073 16:13:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:36.073 16:13:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:36.073 16:13:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:36.074 16:13:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:36.074 16:13:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:36.074 16:13:02 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:36.074 16:13:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:36.074 16:13:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:36.074 16:13:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:36.074 16:13:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:36.074 16:13:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.074 16:13:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.074 16:13:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:36.074 16:13:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.074 16:13:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.074 16:13:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:36.074 "name": "Existed_Raid", 00:16:36.074 "uuid": "50086253-a124-40e2-8cf8-0c21cad673e4", 00:16:36.074 "strip_size_kb": 64, 00:16:36.074 "state": "configuring", 00:16:36.074 "raid_level": "raid5f", 00:16:36.074 "superblock": true, 00:16:36.074 "num_base_bdevs": 4, 00:16:36.074 "num_base_bdevs_discovered": 2, 00:16:36.074 "num_base_bdevs_operational": 4, 00:16:36.074 "base_bdevs_list": [ 00:16:36.074 { 00:16:36.074 "name": "BaseBdev1", 00:16:36.074 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.074 "is_configured": false, 00:16:36.074 "data_offset": 0, 00:16:36.074 "data_size": 0 00:16:36.074 }, 00:16:36.074 { 00:16:36.074 "name": null, 00:16:36.074 "uuid": "e1ff4a75-694a-4c9f-99b8-83c1d869ecbe", 00:16:36.074 
"is_configured": false, 00:16:36.074 "data_offset": 0, 00:16:36.074 "data_size": 63488 00:16:36.074 }, 00:16:36.074 { 00:16:36.074 "name": "BaseBdev3", 00:16:36.074 "uuid": "7cec8e8f-a020-4531-a1e4-1c4037d64586", 00:16:36.074 "is_configured": true, 00:16:36.074 "data_offset": 2048, 00:16:36.074 "data_size": 63488 00:16:36.074 }, 00:16:36.074 { 00:16:36.074 "name": "BaseBdev4", 00:16:36.074 "uuid": "27bd9eff-fbcc-44ae-ba2a-1eac04c9f6ee", 00:16:36.074 "is_configured": true, 00:16:36.074 "data_offset": 2048, 00:16:36.074 "data_size": 63488 00:16:36.074 } 00:16:36.074 ] 00:16:36.074 }' 00:16:36.074 16:13:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:36.074 16:13:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.643 16:13:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.643 16:13:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.643 16:13:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.643 16:13:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:36.643 16:13:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.643 16:13:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:36.643 16:13:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:36.643 16:13:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.643 16:13:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.643 [2024-12-12 16:13:02.863287] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:16:36.643 BaseBdev1 00:16:36.643 16:13:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.643 16:13:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:36.643 16:13:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:36.643 16:13:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:36.643 16:13:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:36.643 16:13:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:36.643 16:13:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:36.643 16:13:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:36.643 16:13:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.643 16:13:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.643 16:13:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.643 16:13:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:36.643 16:13:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.643 16:13:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.643 [ 00:16:36.643 { 00:16:36.643 "name": "BaseBdev1", 00:16:36.643 "aliases": [ 00:16:36.643 "b8799996-77db-4ed1-a7c8-b8244c207662" 00:16:36.643 ], 00:16:36.643 "product_name": "Malloc disk", 00:16:36.643 "block_size": 512, 00:16:36.643 "num_blocks": 65536, 00:16:36.643 "uuid": "b8799996-77db-4ed1-a7c8-b8244c207662", 
00:16:36.643 "assigned_rate_limits": { 00:16:36.643 "rw_ios_per_sec": 0, 00:16:36.643 "rw_mbytes_per_sec": 0, 00:16:36.643 "r_mbytes_per_sec": 0, 00:16:36.643 "w_mbytes_per_sec": 0 00:16:36.643 }, 00:16:36.643 "claimed": true, 00:16:36.643 "claim_type": "exclusive_write", 00:16:36.643 "zoned": false, 00:16:36.643 "supported_io_types": { 00:16:36.643 "read": true, 00:16:36.643 "write": true, 00:16:36.643 "unmap": true, 00:16:36.643 "flush": true, 00:16:36.643 "reset": true, 00:16:36.643 "nvme_admin": false, 00:16:36.643 "nvme_io": false, 00:16:36.643 "nvme_io_md": false, 00:16:36.643 "write_zeroes": true, 00:16:36.643 "zcopy": true, 00:16:36.643 "get_zone_info": false, 00:16:36.643 "zone_management": false, 00:16:36.643 "zone_append": false, 00:16:36.643 "compare": false, 00:16:36.643 "compare_and_write": false, 00:16:36.643 "abort": true, 00:16:36.643 "seek_hole": false, 00:16:36.643 "seek_data": false, 00:16:36.643 "copy": true, 00:16:36.643 "nvme_iov_md": false 00:16:36.643 }, 00:16:36.643 "memory_domains": [ 00:16:36.643 { 00:16:36.643 "dma_device_id": "system", 00:16:36.643 "dma_device_type": 1 00:16:36.643 }, 00:16:36.643 { 00:16:36.643 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:36.643 "dma_device_type": 2 00:16:36.643 } 00:16:36.643 ], 00:16:36.643 "driver_specific": {} 00:16:36.643 } 00:16:36.643 ] 00:16:36.643 16:13:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.643 16:13:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:36.643 16:13:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:36.643 16:13:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:36.643 16:13:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:36.643 16:13:02 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:36.643 16:13:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:36.643 16:13:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:36.643 16:13:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:36.643 16:13:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:36.643 16:13:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:36.643 16:13:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:36.643 16:13:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.643 16:13:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:36.643 16:13:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.643 16:13:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.643 16:13:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.643 16:13:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:36.643 "name": "Existed_Raid", 00:16:36.643 "uuid": "50086253-a124-40e2-8cf8-0c21cad673e4", 00:16:36.643 "strip_size_kb": 64, 00:16:36.643 "state": "configuring", 00:16:36.643 "raid_level": "raid5f", 00:16:36.643 "superblock": true, 00:16:36.643 "num_base_bdevs": 4, 00:16:36.643 "num_base_bdevs_discovered": 3, 00:16:36.643 "num_base_bdevs_operational": 4, 00:16:36.643 "base_bdevs_list": [ 00:16:36.643 { 00:16:36.643 "name": "BaseBdev1", 00:16:36.643 "uuid": "b8799996-77db-4ed1-a7c8-b8244c207662", 
00:16:36.643 "is_configured": true, 00:16:36.644 "data_offset": 2048, 00:16:36.644 "data_size": 63488 00:16:36.644 }, 00:16:36.644 { 00:16:36.644 "name": null, 00:16:36.644 "uuid": "e1ff4a75-694a-4c9f-99b8-83c1d869ecbe", 00:16:36.644 "is_configured": false, 00:16:36.644 "data_offset": 0, 00:16:36.644 "data_size": 63488 00:16:36.644 }, 00:16:36.644 { 00:16:36.644 "name": "BaseBdev3", 00:16:36.644 "uuid": "7cec8e8f-a020-4531-a1e4-1c4037d64586", 00:16:36.644 "is_configured": true, 00:16:36.644 "data_offset": 2048, 00:16:36.644 "data_size": 63488 00:16:36.644 }, 00:16:36.644 { 00:16:36.644 "name": "BaseBdev4", 00:16:36.644 "uuid": "27bd9eff-fbcc-44ae-ba2a-1eac04c9f6ee", 00:16:36.644 "is_configured": true, 00:16:36.644 "data_offset": 2048, 00:16:36.644 "data_size": 63488 00:16:36.644 } 00:16:36.644 ] 00:16:36.644 }' 00:16:36.644 16:13:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:36.644 16:13:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.213 16:13:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:37.213 16:13:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.213 16:13:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.213 16:13:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.213 16:13:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.213 16:13:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:37.213 16:13:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:37.213 16:13:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:37.213 16:13:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.213 [2024-12-12 16:13:03.370486] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:37.213 16:13:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.213 16:13:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:37.213 16:13:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:37.213 16:13:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:37.213 16:13:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:37.213 16:13:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:37.213 16:13:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:37.213 16:13:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:37.213 16:13:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:37.213 16:13:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:37.213 16:13:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:37.213 16:13:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.213 16:13:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:37.213 16:13:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.213 16:13:03 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:37.213 16:13:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.213 16:13:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:37.213 "name": "Existed_Raid", 00:16:37.213 "uuid": "50086253-a124-40e2-8cf8-0c21cad673e4", 00:16:37.213 "strip_size_kb": 64, 00:16:37.213 "state": "configuring", 00:16:37.213 "raid_level": "raid5f", 00:16:37.213 "superblock": true, 00:16:37.213 "num_base_bdevs": 4, 00:16:37.213 "num_base_bdevs_discovered": 2, 00:16:37.213 "num_base_bdevs_operational": 4, 00:16:37.213 "base_bdevs_list": [ 00:16:37.213 { 00:16:37.213 "name": "BaseBdev1", 00:16:37.213 "uuid": "b8799996-77db-4ed1-a7c8-b8244c207662", 00:16:37.213 "is_configured": true, 00:16:37.213 "data_offset": 2048, 00:16:37.213 "data_size": 63488 00:16:37.213 }, 00:16:37.213 { 00:16:37.213 "name": null, 00:16:37.213 "uuid": "e1ff4a75-694a-4c9f-99b8-83c1d869ecbe", 00:16:37.213 "is_configured": false, 00:16:37.213 "data_offset": 0, 00:16:37.213 "data_size": 63488 00:16:37.213 }, 00:16:37.213 { 00:16:37.213 "name": null, 00:16:37.213 "uuid": "7cec8e8f-a020-4531-a1e4-1c4037d64586", 00:16:37.213 "is_configured": false, 00:16:37.213 "data_offset": 0, 00:16:37.213 "data_size": 63488 00:16:37.213 }, 00:16:37.213 { 00:16:37.213 "name": "BaseBdev4", 00:16:37.213 "uuid": "27bd9eff-fbcc-44ae-ba2a-1eac04c9f6ee", 00:16:37.213 "is_configured": true, 00:16:37.213 "data_offset": 2048, 00:16:37.213 "data_size": 63488 00:16:37.213 } 00:16:37.213 ] 00:16:37.213 }' 00:16:37.213 16:13:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:37.213 16:13:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.783 16:13:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:37.783 16:13:03 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.783 16:13:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.783 16:13:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.783 16:13:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.783 16:13:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:37.783 16:13:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:37.783 16:13:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.783 16:13:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.783 [2024-12-12 16:13:03.905566] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:37.783 16:13:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.783 16:13:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:37.783 16:13:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:37.783 16:13:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:37.783 16:13:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:37.783 16:13:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:37.783 16:13:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:37.783 16:13:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:16:37.783 16:13:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:37.783 16:13:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:37.783 16:13:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:37.783 16:13:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:37.783 16:13:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.783 16:13:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.783 16:13:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.783 16:13:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.783 16:13:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:37.783 "name": "Existed_Raid", 00:16:37.783 "uuid": "50086253-a124-40e2-8cf8-0c21cad673e4", 00:16:37.783 "strip_size_kb": 64, 00:16:37.783 "state": "configuring", 00:16:37.783 "raid_level": "raid5f", 00:16:37.783 "superblock": true, 00:16:37.783 "num_base_bdevs": 4, 00:16:37.783 "num_base_bdevs_discovered": 3, 00:16:37.783 "num_base_bdevs_operational": 4, 00:16:37.783 "base_bdevs_list": [ 00:16:37.783 { 00:16:37.783 "name": "BaseBdev1", 00:16:37.783 "uuid": "b8799996-77db-4ed1-a7c8-b8244c207662", 00:16:37.783 "is_configured": true, 00:16:37.783 "data_offset": 2048, 00:16:37.783 "data_size": 63488 00:16:37.783 }, 00:16:37.783 { 00:16:37.783 "name": null, 00:16:37.783 "uuid": "e1ff4a75-694a-4c9f-99b8-83c1d869ecbe", 00:16:37.783 "is_configured": false, 00:16:37.783 "data_offset": 0, 00:16:37.783 "data_size": 63488 00:16:37.783 }, 00:16:37.783 { 00:16:37.783 "name": "BaseBdev3", 00:16:37.783 "uuid": "7cec8e8f-a020-4531-a1e4-1c4037d64586", 
00:16:37.783 "is_configured": true, 00:16:37.783 "data_offset": 2048, 00:16:37.783 "data_size": 63488 00:16:37.783 }, 00:16:37.784 { 00:16:37.784 "name": "BaseBdev4", 00:16:37.784 "uuid": "27bd9eff-fbcc-44ae-ba2a-1eac04c9f6ee", 00:16:37.784 "is_configured": true, 00:16:37.784 "data_offset": 2048, 00:16:37.784 "data_size": 63488 00:16:37.784 } 00:16:37.784 ] 00:16:37.784 }' 00:16:37.784 16:13:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:37.784 16:13:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.043 16:13:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:38.043 16:13:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.043 16:13:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.043 16:13:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.043 16:13:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.303 16:13:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:38.303 16:13:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:38.303 16:13:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.303 16:13:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.303 [2024-12-12 16:13:04.420729] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:38.303 16:13:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.303 16:13:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:16:38.303 16:13:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:38.303 16:13:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:38.303 16:13:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:38.303 16:13:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:38.303 16:13:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:38.303 16:13:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:38.303 16:13:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:38.303 16:13:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:38.303 16:13:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:38.303 16:13:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.303 16:13:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:38.303 16:13:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.303 16:13:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.303 16:13:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.303 16:13:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:38.303 "name": "Existed_Raid", 00:16:38.303 "uuid": "50086253-a124-40e2-8cf8-0c21cad673e4", 00:16:38.303 "strip_size_kb": 64, 00:16:38.303 "state": "configuring", 00:16:38.303 "raid_level": "raid5f", 
00:16:38.303 "superblock": true, 00:16:38.303 "num_base_bdevs": 4, 00:16:38.303 "num_base_bdevs_discovered": 2, 00:16:38.303 "num_base_bdevs_operational": 4, 00:16:38.303 "base_bdevs_list": [ 00:16:38.303 { 00:16:38.303 "name": null, 00:16:38.303 "uuid": "b8799996-77db-4ed1-a7c8-b8244c207662", 00:16:38.303 "is_configured": false, 00:16:38.303 "data_offset": 0, 00:16:38.303 "data_size": 63488 00:16:38.303 }, 00:16:38.303 { 00:16:38.303 "name": null, 00:16:38.303 "uuid": "e1ff4a75-694a-4c9f-99b8-83c1d869ecbe", 00:16:38.303 "is_configured": false, 00:16:38.303 "data_offset": 0, 00:16:38.303 "data_size": 63488 00:16:38.303 }, 00:16:38.303 { 00:16:38.303 "name": "BaseBdev3", 00:16:38.303 "uuid": "7cec8e8f-a020-4531-a1e4-1c4037d64586", 00:16:38.303 "is_configured": true, 00:16:38.303 "data_offset": 2048, 00:16:38.303 "data_size": 63488 00:16:38.303 }, 00:16:38.303 { 00:16:38.303 "name": "BaseBdev4", 00:16:38.303 "uuid": "27bd9eff-fbcc-44ae-ba2a-1eac04c9f6ee", 00:16:38.303 "is_configured": true, 00:16:38.303 "data_offset": 2048, 00:16:38.303 "data_size": 63488 00:16:38.303 } 00:16:38.303 ] 00:16:38.303 }' 00:16:38.303 16:13:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:38.303 16:13:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.563 16:13:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.563 16:13:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.563 16:13:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.563 16:13:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:38.823 16:13:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.823 16:13:04 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:38.823 16:13:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:38.823 16:13:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.823 16:13:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.823 [2024-12-12 16:13:04.951869] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:38.823 16:13:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.823 16:13:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:38.823 16:13:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:38.823 16:13:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:38.823 16:13:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:38.823 16:13:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:38.823 16:13:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:38.823 16:13:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:38.823 16:13:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:38.823 16:13:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:38.823 16:13:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:38.823 16:13:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:16:38.823 16:13:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.823 16:13:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:38.823 16:13:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.823 16:13:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.823 16:13:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:38.823 "name": "Existed_Raid", 00:16:38.823 "uuid": "50086253-a124-40e2-8cf8-0c21cad673e4", 00:16:38.823 "strip_size_kb": 64, 00:16:38.823 "state": "configuring", 00:16:38.823 "raid_level": "raid5f", 00:16:38.823 "superblock": true, 00:16:38.823 "num_base_bdevs": 4, 00:16:38.823 "num_base_bdevs_discovered": 3, 00:16:38.823 "num_base_bdevs_operational": 4, 00:16:38.823 "base_bdevs_list": [ 00:16:38.823 { 00:16:38.823 "name": null, 00:16:38.823 "uuid": "b8799996-77db-4ed1-a7c8-b8244c207662", 00:16:38.823 "is_configured": false, 00:16:38.823 "data_offset": 0, 00:16:38.823 "data_size": 63488 00:16:38.823 }, 00:16:38.823 { 00:16:38.823 "name": "BaseBdev2", 00:16:38.823 "uuid": "e1ff4a75-694a-4c9f-99b8-83c1d869ecbe", 00:16:38.823 "is_configured": true, 00:16:38.823 "data_offset": 2048, 00:16:38.823 "data_size": 63488 00:16:38.823 }, 00:16:38.823 { 00:16:38.823 "name": "BaseBdev3", 00:16:38.823 "uuid": "7cec8e8f-a020-4531-a1e4-1c4037d64586", 00:16:38.823 "is_configured": true, 00:16:38.823 "data_offset": 2048, 00:16:38.823 "data_size": 63488 00:16:38.823 }, 00:16:38.823 { 00:16:38.823 "name": "BaseBdev4", 00:16:38.823 "uuid": "27bd9eff-fbcc-44ae-ba2a-1eac04c9f6ee", 00:16:38.823 "is_configured": true, 00:16:38.823 "data_offset": 2048, 00:16:38.823 "data_size": 63488 00:16:38.823 } 00:16:38.823 ] 00:16:38.823 }' 00:16:38.823 16:13:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 
-- # xtrace_disable 00:16:38.823 16:13:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.392 16:13:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.392 16:13:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.392 16:13:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:39.392 16:13:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.392 16:13:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.392 16:13:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:39.392 16:13:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.392 16:13:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:39.392 16:13:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.392 16:13:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.392 16:13:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.392 16:13:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u b8799996-77db-4ed1-a7c8-b8244c207662 00:16:39.392 16:13:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.392 16:13:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.392 [2024-12-12 16:13:05.563019] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:39.392 [2024-12-12 16:13:05.563258] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:39.392 [2024-12-12 16:13:05.563276] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:39.392 [2024-12-12 16:13:05.563524] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:16:39.392 NewBaseBdev 00:16:39.392 16:13:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.392 16:13:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:39.392 16:13:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:16:39.392 16:13:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:39.392 16:13:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:39.392 16:13:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:39.392 16:13:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:39.392 16:13:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:39.392 16:13:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.392 16:13:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.392 [2024-12-12 16:13:05.570585] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:39.392 [2024-12-12 16:13:05.570649] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:16:39.392 [2024-12-12 16:13:05.570971] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:39.392 16:13:05 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.392 16:13:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:39.392 16:13:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.392 16:13:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.392 [ 00:16:39.392 { 00:16:39.392 "name": "NewBaseBdev", 00:16:39.392 "aliases": [ 00:16:39.392 "b8799996-77db-4ed1-a7c8-b8244c207662" 00:16:39.392 ], 00:16:39.392 "product_name": "Malloc disk", 00:16:39.392 "block_size": 512, 00:16:39.392 "num_blocks": 65536, 00:16:39.392 "uuid": "b8799996-77db-4ed1-a7c8-b8244c207662", 00:16:39.392 "assigned_rate_limits": { 00:16:39.392 "rw_ios_per_sec": 0, 00:16:39.392 "rw_mbytes_per_sec": 0, 00:16:39.392 "r_mbytes_per_sec": 0, 00:16:39.392 "w_mbytes_per_sec": 0 00:16:39.392 }, 00:16:39.392 "claimed": true, 00:16:39.392 "claim_type": "exclusive_write", 00:16:39.392 "zoned": false, 00:16:39.392 "supported_io_types": { 00:16:39.392 "read": true, 00:16:39.392 "write": true, 00:16:39.392 "unmap": true, 00:16:39.392 "flush": true, 00:16:39.392 "reset": true, 00:16:39.392 "nvme_admin": false, 00:16:39.392 "nvme_io": false, 00:16:39.392 "nvme_io_md": false, 00:16:39.392 "write_zeroes": true, 00:16:39.392 "zcopy": true, 00:16:39.392 "get_zone_info": false, 00:16:39.393 "zone_management": false, 00:16:39.393 "zone_append": false, 00:16:39.393 "compare": false, 00:16:39.393 "compare_and_write": false, 00:16:39.393 "abort": true, 00:16:39.393 "seek_hole": false, 00:16:39.393 "seek_data": false, 00:16:39.393 "copy": true, 00:16:39.393 "nvme_iov_md": false 00:16:39.393 }, 00:16:39.393 "memory_domains": [ 00:16:39.393 { 00:16:39.393 "dma_device_id": "system", 00:16:39.393 "dma_device_type": 1 00:16:39.393 }, 00:16:39.393 { 00:16:39.393 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:39.393 "dma_device_type": 2 00:16:39.393 } 
00:16:39.393 ], 00:16:39.393 "driver_specific": {} 00:16:39.393 } 00:16:39.393 ] 00:16:39.393 16:13:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.393 16:13:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:39.393 16:13:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:39.393 16:13:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:39.393 16:13:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:39.393 16:13:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:39.393 16:13:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:39.393 16:13:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:39.393 16:13:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:39.393 16:13:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:39.393 16:13:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:39.393 16:13:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:39.393 16:13:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.393 16:13:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.393 16:13:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.393 16:13:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:39.393 
16:13:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.393 16:13:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:39.393 "name": "Existed_Raid", 00:16:39.393 "uuid": "50086253-a124-40e2-8cf8-0c21cad673e4", 00:16:39.393 "strip_size_kb": 64, 00:16:39.393 "state": "online", 00:16:39.393 "raid_level": "raid5f", 00:16:39.393 "superblock": true, 00:16:39.393 "num_base_bdevs": 4, 00:16:39.393 "num_base_bdevs_discovered": 4, 00:16:39.393 "num_base_bdevs_operational": 4, 00:16:39.393 "base_bdevs_list": [ 00:16:39.393 { 00:16:39.393 "name": "NewBaseBdev", 00:16:39.393 "uuid": "b8799996-77db-4ed1-a7c8-b8244c207662", 00:16:39.393 "is_configured": true, 00:16:39.393 "data_offset": 2048, 00:16:39.393 "data_size": 63488 00:16:39.393 }, 00:16:39.393 { 00:16:39.393 "name": "BaseBdev2", 00:16:39.393 "uuid": "e1ff4a75-694a-4c9f-99b8-83c1d869ecbe", 00:16:39.393 "is_configured": true, 00:16:39.393 "data_offset": 2048, 00:16:39.393 "data_size": 63488 00:16:39.393 }, 00:16:39.393 { 00:16:39.393 "name": "BaseBdev3", 00:16:39.393 "uuid": "7cec8e8f-a020-4531-a1e4-1c4037d64586", 00:16:39.393 "is_configured": true, 00:16:39.393 "data_offset": 2048, 00:16:39.393 "data_size": 63488 00:16:39.393 }, 00:16:39.393 { 00:16:39.393 "name": "BaseBdev4", 00:16:39.393 "uuid": "27bd9eff-fbcc-44ae-ba2a-1eac04c9f6ee", 00:16:39.393 "is_configured": true, 00:16:39.393 "data_offset": 2048, 00:16:39.393 "data_size": 63488 00:16:39.393 } 00:16:39.393 ] 00:16:39.393 }' 00:16:39.393 16:13:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:39.393 16:13:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.653 16:13:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:39.653 16:13:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local 
raid_bdev_name=Existed_Raid 00:16:39.653 16:13:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:39.653 16:13:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:39.653 16:13:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:39.653 16:13:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:39.653 16:13:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:39.653 16:13:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:39.653 16:13:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.653 16:13:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.653 [2024-12-12 16:13:05.962404] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:39.653 16:13:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.653 16:13:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:39.653 "name": "Existed_Raid", 00:16:39.653 "aliases": [ 00:16:39.653 "50086253-a124-40e2-8cf8-0c21cad673e4" 00:16:39.653 ], 00:16:39.653 "product_name": "Raid Volume", 00:16:39.653 "block_size": 512, 00:16:39.653 "num_blocks": 190464, 00:16:39.653 "uuid": "50086253-a124-40e2-8cf8-0c21cad673e4", 00:16:39.653 "assigned_rate_limits": { 00:16:39.653 "rw_ios_per_sec": 0, 00:16:39.653 "rw_mbytes_per_sec": 0, 00:16:39.653 "r_mbytes_per_sec": 0, 00:16:39.653 "w_mbytes_per_sec": 0 00:16:39.653 }, 00:16:39.653 "claimed": false, 00:16:39.653 "zoned": false, 00:16:39.653 "supported_io_types": { 00:16:39.653 "read": true, 00:16:39.653 "write": true, 00:16:39.653 "unmap": false, 00:16:39.653 "flush": false, 
00:16:39.653 "reset": true, 00:16:39.653 "nvme_admin": false, 00:16:39.653 "nvme_io": false, 00:16:39.653 "nvme_io_md": false, 00:16:39.653 "write_zeroes": true, 00:16:39.653 "zcopy": false, 00:16:39.653 "get_zone_info": false, 00:16:39.653 "zone_management": false, 00:16:39.653 "zone_append": false, 00:16:39.653 "compare": false, 00:16:39.653 "compare_and_write": false, 00:16:39.653 "abort": false, 00:16:39.653 "seek_hole": false, 00:16:39.653 "seek_data": false, 00:16:39.653 "copy": false, 00:16:39.653 "nvme_iov_md": false 00:16:39.653 }, 00:16:39.653 "driver_specific": { 00:16:39.653 "raid": { 00:16:39.653 "uuid": "50086253-a124-40e2-8cf8-0c21cad673e4", 00:16:39.653 "strip_size_kb": 64, 00:16:39.653 "state": "online", 00:16:39.653 "raid_level": "raid5f", 00:16:39.653 "superblock": true, 00:16:39.653 "num_base_bdevs": 4, 00:16:39.653 "num_base_bdevs_discovered": 4, 00:16:39.653 "num_base_bdevs_operational": 4, 00:16:39.653 "base_bdevs_list": [ 00:16:39.653 { 00:16:39.653 "name": "NewBaseBdev", 00:16:39.653 "uuid": "b8799996-77db-4ed1-a7c8-b8244c207662", 00:16:39.653 "is_configured": true, 00:16:39.653 "data_offset": 2048, 00:16:39.653 "data_size": 63488 00:16:39.653 }, 00:16:39.653 { 00:16:39.653 "name": "BaseBdev2", 00:16:39.653 "uuid": "e1ff4a75-694a-4c9f-99b8-83c1d869ecbe", 00:16:39.653 "is_configured": true, 00:16:39.653 "data_offset": 2048, 00:16:39.653 "data_size": 63488 00:16:39.653 }, 00:16:39.653 { 00:16:39.653 "name": "BaseBdev3", 00:16:39.653 "uuid": "7cec8e8f-a020-4531-a1e4-1c4037d64586", 00:16:39.653 "is_configured": true, 00:16:39.653 "data_offset": 2048, 00:16:39.653 "data_size": 63488 00:16:39.653 }, 00:16:39.653 { 00:16:39.653 "name": "BaseBdev4", 00:16:39.653 "uuid": "27bd9eff-fbcc-44ae-ba2a-1eac04c9f6ee", 00:16:39.653 "is_configured": true, 00:16:39.653 "data_offset": 2048, 00:16:39.653 "data_size": 63488 00:16:39.653 } 00:16:39.653 ] 00:16:39.653 } 00:16:39.653 } 00:16:39.653 }' 00:16:39.653 16:13:05 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:39.913 16:13:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:39.913 BaseBdev2 00:16:39.913 BaseBdev3 00:16:39.913 BaseBdev4' 00:16:39.913 16:13:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:39.913 16:13:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:39.913 16:13:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:39.913 16:13:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:39.913 16:13:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.913 16:13:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.913 16:13:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:39.913 16:13:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.913 16:13:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:39.913 16:13:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:39.913 16:13:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:39.913 16:13:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:39.913 16:13:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.913 16:13:06 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:39.913 16:13:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:39.913 16:13:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.913 16:13:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:39.913 16:13:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:39.913 16:13:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:39.913 16:13:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:39.913 16:13:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.913 16:13:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.913 16:13:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:39.913 16:13:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.914 16:13:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:39.914 16:13:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:39.914 16:13:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:39.914 16:13:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:39.914 16:13:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:39.914 16:13:06 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.914 16:13:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.173 16:13:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.173 16:13:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:40.173 16:13:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:40.173 16:13:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:40.173 16:13:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.173 16:13:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.173 [2024-12-12 16:13:06.297594] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:40.173 [2024-12-12 16:13:06.297625] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:40.173 [2024-12-12 16:13:06.297698] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:40.173 [2024-12-12 16:13:06.297999] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:40.173 [2024-12-12 16:13:06.298011] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:16:40.173 16:13:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.174 16:13:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 85530 00:16:40.174 16:13:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 85530 ']' 00:16:40.174 16:13:06 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@958 -- # kill -0 85530 00:16:40.174 16:13:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:16:40.174 16:13:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:40.174 16:13:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85530 00:16:40.174 16:13:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:40.174 killing process with pid 85530 00:16:40.174 16:13:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:40.174 16:13:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85530' 00:16:40.174 16:13:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 85530 00:16:40.174 [2024-12-12 16:13:06.340532] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:40.174 16:13:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 85530 00:16:40.433 [2024-12-12 16:13:06.716572] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:41.814 16:13:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:16:41.814 00:16:41.814 real 0m11.591s 00:16:41.814 user 0m18.401s 00:16:41.814 sys 0m2.147s 00:16:41.814 16:13:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:41.814 16:13:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.814 ************************************ 00:16:41.814 END TEST raid5f_state_function_test_sb 00:16:41.814 ************************************ 00:16:41.814 16:13:07 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:16:41.814 16:13:07 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:41.814 16:13:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:41.814 16:13:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:41.814 ************************************ 00:16:41.814 START TEST raid5f_superblock_test 00:16:41.814 ************************************ 00:16:41.814 16:13:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 4 00:16:41.814 16:13:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:16:41.814 16:13:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:16:41.814 16:13:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:16:41.814 16:13:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:16:41.814 16:13:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:16:41.814 16:13:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:16:41.814 16:13:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:16:41.814 16:13:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:16:41.814 16:13:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:16:41.814 16:13:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:16:41.814 16:13:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:16:41.814 16:13:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:16:41.814 16:13:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:16:41.814 16:13:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 
00:16:41.814 16:13:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:16:41.814 16:13:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:16:41.814 16:13:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=86203 00:16:41.814 16:13:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:16:41.814 16:13:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 86203 00:16:41.815 16:13:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 86203 ']' 00:16:41.815 16:13:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:41.815 16:13:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:41.815 16:13:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:41.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:41.815 16:13:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:41.815 16:13:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.815 [2024-12-12 16:13:07.942218] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:16:41.815 [2024-12-12 16:13:07.942442] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86203 ] 00:16:41.815 [2024-12-12 16:13:08.116831] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:42.074 [2024-12-12 16:13:08.230291] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:42.074 [2024-12-12 16:13:08.420729] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:42.074 [2024-12-12 16:13:08.420861] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:42.663 16:13:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:42.663 16:13:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:16:42.663 16:13:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:16:42.663 16:13:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:42.663 16:13:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:16:42.663 16:13:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:16:42.663 16:13:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:42.663 16:13:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:42.663 16:13:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:42.663 16:13:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:42.663 16:13:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:16:42.663 16:13:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.663 16:13:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.663 malloc1 00:16:42.663 16:13:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.663 16:13:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:42.663 16:13:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.663 16:13:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.663 [2024-12-12 16:13:08.823773] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:42.663 [2024-12-12 16:13:08.823832] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:42.663 [2024-12-12 16:13:08.823854] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:42.663 [2024-12-12 16:13:08.823863] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:42.663 [2024-12-12 16:13:08.825883] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:42.663 [2024-12-12 16:13:08.825996] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:42.663 pt1 00:16:42.663 16:13:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.663 16:13:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:42.663 16:13:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:42.663 16:13:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:16:42.663 16:13:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:16:42.663 16:13:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:42.663 16:13:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:42.663 16:13:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:42.663 16:13:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:42.663 16:13:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:16:42.663 16:13:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.663 16:13:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.663 malloc2 00:16:42.663 16:13:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.663 16:13:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:42.663 16:13:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.663 16:13:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.663 [2024-12-12 16:13:08.878457] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:42.663 [2024-12-12 16:13:08.878573] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:42.663 [2024-12-12 16:13:08.878614] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:42.663 [2024-12-12 16:13:08.878652] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:42.663 [2024-12-12 16:13:08.880710] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:42.663 [2024-12-12 16:13:08.880782] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:42.663 pt2 00:16:42.663 16:13:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.663 16:13:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:42.663 16:13:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:42.663 16:13:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:16:42.663 16:13:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:16:42.663 16:13:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:16:42.663 16:13:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:42.663 16:13:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:42.663 16:13:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:42.663 16:13:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:16:42.663 16:13:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.663 16:13:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.663 malloc3 00:16:42.663 16:13:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.663 16:13:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:42.663 16:13:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.663 16:13:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.663 [2024-12-12 16:13:08.960056] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:42.663 [2024-12-12 16:13:08.960155] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:42.663 [2024-12-12 16:13:08.960196] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:42.663 [2024-12-12 16:13:08.960226] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:42.663 [2024-12-12 16:13:08.962274] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:42.663 [2024-12-12 16:13:08.962344] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:42.663 pt3 00:16:42.663 16:13:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.663 16:13:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:42.663 16:13:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:42.663 16:13:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:16:42.663 16:13:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:16:42.663 16:13:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:16:42.663 16:13:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:42.663 16:13:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:42.663 16:13:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:42.663 16:13:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:16:42.663 16:13:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.663 16:13:08 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.933 malloc4 00:16:42.933 16:13:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.933 16:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:42.933 16:13:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.933 16:13:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.933 [2024-12-12 16:13:09.018674] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:42.933 [2024-12-12 16:13:09.018734] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:42.933 [2024-12-12 16:13:09.018765] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:42.933 [2024-12-12 16:13:09.018774] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:42.933 [2024-12-12 16:13:09.021146] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:42.933 [2024-12-12 16:13:09.021183] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:42.933 pt4 00:16:42.933 16:13:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.933 16:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:42.933 16:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:42.933 16:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:16:42.933 16:13:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.933 16:13:09 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:42.933 [2024-12-12 16:13:09.030674] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:42.933 [2024-12-12 16:13:09.032478] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:42.933 [2024-12-12 16:13:09.032561] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:42.933 [2024-12-12 16:13:09.032608] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:42.933 [2024-12-12 16:13:09.032786] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:42.933 [2024-12-12 16:13:09.032801] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:42.933 [2024-12-12 16:13:09.033064] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:42.933 [2024-12-12 16:13:09.039822] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:42.933 [2024-12-12 16:13:09.039847] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:42.933 [2024-12-12 16:13:09.040059] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:42.933 16:13:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.933 16:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:42.933 16:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:42.933 16:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:42.933 16:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:42.933 16:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:42.933 
16:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:42.933 16:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:42.933 16:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:42.933 16:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:42.933 16:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:42.933 16:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.933 16:13:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.933 16:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:42.933 16:13:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.933 16:13:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.933 16:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:42.933 "name": "raid_bdev1", 00:16:42.933 "uuid": "81cf0071-acfb-4e44-a36a-85163b41daca", 00:16:42.933 "strip_size_kb": 64, 00:16:42.933 "state": "online", 00:16:42.933 "raid_level": "raid5f", 00:16:42.933 "superblock": true, 00:16:42.933 "num_base_bdevs": 4, 00:16:42.933 "num_base_bdevs_discovered": 4, 00:16:42.933 "num_base_bdevs_operational": 4, 00:16:42.933 "base_bdevs_list": [ 00:16:42.933 { 00:16:42.933 "name": "pt1", 00:16:42.933 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:42.933 "is_configured": true, 00:16:42.933 "data_offset": 2048, 00:16:42.933 "data_size": 63488 00:16:42.933 }, 00:16:42.933 { 00:16:42.933 "name": "pt2", 00:16:42.933 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:42.933 "is_configured": true, 00:16:42.933 "data_offset": 2048, 00:16:42.933 
"data_size": 63488 00:16:42.933 }, 00:16:42.933 { 00:16:42.933 "name": "pt3", 00:16:42.933 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:42.934 "is_configured": true, 00:16:42.934 "data_offset": 2048, 00:16:42.934 "data_size": 63488 00:16:42.934 }, 00:16:42.934 { 00:16:42.934 "name": "pt4", 00:16:42.934 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:42.934 "is_configured": true, 00:16:42.934 "data_offset": 2048, 00:16:42.934 "data_size": 63488 00:16:42.934 } 00:16:42.934 ] 00:16:42.934 }' 00:16:42.934 16:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:42.934 16:13:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.206 16:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:16:43.206 16:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:43.206 16:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:43.206 16:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:43.206 16:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:43.206 16:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:43.206 16:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:43.206 16:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:43.206 16:13:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.206 16:13:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.206 [2024-12-12 16:13:09.475926] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:43.207 16:13:09 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.207 16:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:43.207 "name": "raid_bdev1", 00:16:43.207 "aliases": [ 00:16:43.207 "81cf0071-acfb-4e44-a36a-85163b41daca" 00:16:43.207 ], 00:16:43.207 "product_name": "Raid Volume", 00:16:43.207 "block_size": 512, 00:16:43.207 "num_blocks": 190464, 00:16:43.207 "uuid": "81cf0071-acfb-4e44-a36a-85163b41daca", 00:16:43.207 "assigned_rate_limits": { 00:16:43.207 "rw_ios_per_sec": 0, 00:16:43.207 "rw_mbytes_per_sec": 0, 00:16:43.207 "r_mbytes_per_sec": 0, 00:16:43.207 "w_mbytes_per_sec": 0 00:16:43.207 }, 00:16:43.207 "claimed": false, 00:16:43.207 "zoned": false, 00:16:43.207 "supported_io_types": { 00:16:43.207 "read": true, 00:16:43.207 "write": true, 00:16:43.207 "unmap": false, 00:16:43.207 "flush": false, 00:16:43.207 "reset": true, 00:16:43.207 "nvme_admin": false, 00:16:43.207 "nvme_io": false, 00:16:43.207 "nvme_io_md": false, 00:16:43.207 "write_zeroes": true, 00:16:43.207 "zcopy": false, 00:16:43.207 "get_zone_info": false, 00:16:43.207 "zone_management": false, 00:16:43.207 "zone_append": false, 00:16:43.207 "compare": false, 00:16:43.207 "compare_and_write": false, 00:16:43.207 "abort": false, 00:16:43.207 "seek_hole": false, 00:16:43.207 "seek_data": false, 00:16:43.207 "copy": false, 00:16:43.207 "nvme_iov_md": false 00:16:43.207 }, 00:16:43.207 "driver_specific": { 00:16:43.207 "raid": { 00:16:43.207 "uuid": "81cf0071-acfb-4e44-a36a-85163b41daca", 00:16:43.207 "strip_size_kb": 64, 00:16:43.207 "state": "online", 00:16:43.207 "raid_level": "raid5f", 00:16:43.207 "superblock": true, 00:16:43.207 "num_base_bdevs": 4, 00:16:43.207 "num_base_bdevs_discovered": 4, 00:16:43.207 "num_base_bdevs_operational": 4, 00:16:43.207 "base_bdevs_list": [ 00:16:43.207 { 00:16:43.207 "name": "pt1", 00:16:43.207 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:43.207 "is_configured": true, 00:16:43.207 "data_offset": 2048, 
00:16:43.207 "data_size": 63488 00:16:43.207 }, 00:16:43.207 { 00:16:43.207 "name": "pt2", 00:16:43.207 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:43.207 "is_configured": true, 00:16:43.207 "data_offset": 2048, 00:16:43.207 "data_size": 63488 00:16:43.207 }, 00:16:43.207 { 00:16:43.207 "name": "pt3", 00:16:43.207 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:43.207 "is_configured": true, 00:16:43.207 "data_offset": 2048, 00:16:43.207 "data_size": 63488 00:16:43.207 }, 00:16:43.207 { 00:16:43.207 "name": "pt4", 00:16:43.207 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:43.207 "is_configured": true, 00:16:43.207 "data_offset": 2048, 00:16:43.207 "data_size": 63488 00:16:43.207 } 00:16:43.207 ] 00:16:43.207 } 00:16:43.207 } 00:16:43.207 }' 00:16:43.207 16:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:43.207 16:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:43.207 pt2 00:16:43.207 pt3 00:16:43.207 pt4' 00:16:43.207 16:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:43.467 16:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:43.467 16:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:43.467 16:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:43.467 16:13:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.467 16:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:43.467 16:13:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.467 16:13:09 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.467 16:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:43.467 16:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:43.467 16:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:43.467 16:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:43.467 16:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:43.467 16:13:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.467 16:13:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.467 16:13:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.467 16:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:43.467 16:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:43.467 16:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:43.467 16:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:43.467 16:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:43.467 16:13:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.467 16:13:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.467 16:13:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.467 16:13:09 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:43.467 16:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:43.467 16:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:43.467 16:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:43.467 16:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:16:43.467 16:13:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.467 16:13:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.467 16:13:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.467 16:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:43.467 16:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:43.467 16:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:43.467 16:13:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.467 16:13:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.467 16:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:16:43.467 [2024-12-12 16:13:09.783398] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:43.467 16:13:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.727 16:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=81cf0071-acfb-4e44-a36a-85163b41daca 00:16:43.727 16:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
81cf0071-acfb-4e44-a36a-85163b41daca ']' 00:16:43.727 16:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:43.727 16:13:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.727 16:13:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.727 [2024-12-12 16:13:09.827135] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:43.727 [2024-12-12 16:13:09.827199] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:43.727 [2024-12-12 16:13:09.827304] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:43.727 [2024-12-12 16:13:09.827418] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:43.727 [2024-12-12 16:13:09.827468] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:43.727 16:13:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.727 16:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:16:43.727 16:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.727 16:13:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.727 16:13:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.727 16:13:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.727 16:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:16:43.727 16:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:16:43.727 16:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:43.727 
16:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:16:43.727 16:13:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.727 16:13:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.727 16:13:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.727 16:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:43.727 16:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:16:43.727 16:13:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.727 16:13:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.727 16:13:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.727 16:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:43.727 16:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:16:43.727 16:13:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.727 16:13:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.727 16:13:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.727 16:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:43.727 16:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:16:43.727 16:13:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.727 16:13:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.727 16:13:09 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.727 16:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:16:43.727 16:13:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.727 16:13:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.727 16:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:43.727 16:13:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.727 16:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:16:43.727 16:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:43.727 16:13:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:16:43.727 16:13:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:43.727 16:13:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:43.727 16:13:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:43.727 16:13:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:43.727 16:13:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:43.727 16:13:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:43.727 16:13:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:16:43.727 16:13:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.727 [2024-12-12 16:13:09.966933] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:43.727 [2024-12-12 16:13:09.968721] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:43.727 [2024-12-12 16:13:09.968778] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:16:43.727 [2024-12-12 16:13:09.968809] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:16:43.727 [2024-12-12 16:13:09.968856] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:43.727 [2024-12-12 16:13:09.968916] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:43.727 [2024-12-12 16:13:09.968935] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:16:43.727 [2024-12-12 16:13:09.968953] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:16:43.727 [2024-12-12 16:13:09.968985] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:43.727 [2024-12-12 16:13:09.968995] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:16:43.727 request: 00:16:43.727 { 00:16:43.727 "name": "raid_bdev1", 00:16:43.727 "raid_level": "raid5f", 00:16:43.727 "base_bdevs": [ 00:16:43.727 "malloc1", 00:16:43.727 "malloc2", 00:16:43.727 "malloc3", 00:16:43.727 "malloc4" 00:16:43.727 ], 00:16:43.727 "strip_size_kb": 64, 00:16:43.727 "superblock": false, 00:16:43.727 "method": "bdev_raid_create", 00:16:43.727 "req_id": 1 00:16:43.727 } 00:16:43.727 Got JSON-RPC error response 
00:16:43.727 response: 00:16:43.727 { 00:16:43.727 "code": -17, 00:16:43.727 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:43.727 } 00:16:43.727 16:13:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:43.727 16:13:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:16:43.727 16:13:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:43.727 16:13:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:43.727 16:13:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:43.727 16:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.727 16:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:16:43.727 16:13:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.727 16:13:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.728 16:13:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.728 16:13:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:16:43.728 16:13:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:16:43.728 16:13:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:43.728 16:13:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.728 16:13:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.728 [2024-12-12 16:13:10.030772] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:43.728 [2024-12-12 16:13:10.030860] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:16:43.728 [2024-12-12 16:13:10.030901] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:43.728 [2024-12-12 16:13:10.030931] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:43.728 [2024-12-12 16:13:10.033014] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:43.728 [2024-12-12 16:13:10.033087] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:43.728 [2024-12-12 16:13:10.033180] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:43.728 [2024-12-12 16:13:10.033247] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:43.728 pt1 00:16:43.728 16:13:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.728 16:13:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:16:43.728 16:13:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:43.728 16:13:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:43.728 16:13:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:43.728 16:13:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:43.728 16:13:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:43.728 16:13:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:43.728 16:13:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:43.728 16:13:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:43.728 16:13:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:16:43.728 16:13:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.728 16:13:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:43.728 16:13:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.728 16:13:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.728 16:13:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.987 16:13:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:43.987 "name": "raid_bdev1", 00:16:43.987 "uuid": "81cf0071-acfb-4e44-a36a-85163b41daca", 00:16:43.987 "strip_size_kb": 64, 00:16:43.987 "state": "configuring", 00:16:43.987 "raid_level": "raid5f", 00:16:43.987 "superblock": true, 00:16:43.987 "num_base_bdevs": 4, 00:16:43.987 "num_base_bdevs_discovered": 1, 00:16:43.987 "num_base_bdevs_operational": 4, 00:16:43.987 "base_bdevs_list": [ 00:16:43.987 { 00:16:43.987 "name": "pt1", 00:16:43.987 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:43.987 "is_configured": true, 00:16:43.987 "data_offset": 2048, 00:16:43.987 "data_size": 63488 00:16:43.987 }, 00:16:43.987 { 00:16:43.987 "name": null, 00:16:43.987 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:43.987 "is_configured": false, 00:16:43.987 "data_offset": 2048, 00:16:43.987 "data_size": 63488 00:16:43.987 }, 00:16:43.987 { 00:16:43.987 "name": null, 00:16:43.987 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:43.987 "is_configured": false, 00:16:43.987 "data_offset": 2048, 00:16:43.987 "data_size": 63488 00:16:43.987 }, 00:16:43.987 { 00:16:43.987 "name": null, 00:16:43.987 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:43.987 "is_configured": false, 00:16:43.987 "data_offset": 2048, 00:16:43.987 "data_size": 63488 00:16:43.987 } 00:16:43.987 ] 00:16:43.987 }' 
00:16:43.987 16:13:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:43.987 16:13:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.247 16:13:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:16:44.247 16:13:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:44.247 16:13:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.247 16:13:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.247 [2024-12-12 16:13:10.490060] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:44.247 [2024-12-12 16:13:10.490186] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:44.247 [2024-12-12 16:13:10.490212] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:44.247 [2024-12-12 16:13:10.490223] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:44.247 [2024-12-12 16:13:10.490649] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:44.247 [2024-12-12 16:13:10.490678] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:44.247 [2024-12-12 16:13:10.490761] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:44.247 [2024-12-12 16:13:10.490786] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:44.247 pt2 00:16:44.247 16:13:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.247 16:13:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:16:44.247 16:13:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:44.247 16:13:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.247 [2024-12-12 16:13:10.498050] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:16:44.247 16:13:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.247 16:13:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:16:44.247 16:13:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:44.247 16:13:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:44.247 16:13:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:44.247 16:13:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:44.247 16:13:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:44.247 16:13:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:44.247 16:13:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:44.247 16:13:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:44.247 16:13:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:44.247 16:13:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:44.247 16:13:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.247 16:13:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.247 16:13:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.247 16:13:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:16:44.247 16:13:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:44.247 "name": "raid_bdev1", 00:16:44.247 "uuid": "81cf0071-acfb-4e44-a36a-85163b41daca", 00:16:44.247 "strip_size_kb": 64, 00:16:44.247 "state": "configuring", 00:16:44.247 "raid_level": "raid5f", 00:16:44.247 "superblock": true, 00:16:44.247 "num_base_bdevs": 4, 00:16:44.247 "num_base_bdevs_discovered": 1, 00:16:44.247 "num_base_bdevs_operational": 4, 00:16:44.247 "base_bdevs_list": [ 00:16:44.247 { 00:16:44.247 "name": "pt1", 00:16:44.247 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:44.247 "is_configured": true, 00:16:44.247 "data_offset": 2048, 00:16:44.247 "data_size": 63488 00:16:44.247 }, 00:16:44.247 { 00:16:44.247 "name": null, 00:16:44.247 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:44.247 "is_configured": false, 00:16:44.247 "data_offset": 0, 00:16:44.247 "data_size": 63488 00:16:44.247 }, 00:16:44.247 { 00:16:44.247 "name": null, 00:16:44.247 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:44.247 "is_configured": false, 00:16:44.247 "data_offset": 2048, 00:16:44.247 "data_size": 63488 00:16:44.247 }, 00:16:44.247 { 00:16:44.247 "name": null, 00:16:44.247 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:44.247 "is_configured": false, 00:16:44.247 "data_offset": 2048, 00:16:44.247 "data_size": 63488 00:16:44.247 } 00:16:44.247 ] 00:16:44.247 }' 00:16:44.247 16:13:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:44.247 16:13:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.817 16:13:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:16:44.817 16:13:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:44.817 16:13:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:16:44.817 16:13:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.817 16:13:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.817 [2024-12-12 16:13:10.933301] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:44.817 [2024-12-12 16:13:10.933370] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:44.817 [2024-12-12 16:13:10.933391] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:16:44.817 [2024-12-12 16:13:10.933400] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:44.817 [2024-12-12 16:13:10.933837] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:44.817 [2024-12-12 16:13:10.933854] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:44.817 [2024-12-12 16:13:10.933952] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:44.817 [2024-12-12 16:13:10.933976] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:44.817 pt2 00:16:44.817 16:13:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.817 16:13:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:44.817 16:13:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:44.817 16:13:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:44.817 16:13:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.817 16:13:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.817 [2024-12-12 16:13:10.945248] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:16:44.817 [2024-12-12 16:13:10.945298] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:44.817 [2024-12-12 16:13:10.945316] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:16:44.817 [2024-12-12 16:13:10.945324] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:44.817 [2024-12-12 16:13:10.945682] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:44.817 [2024-12-12 16:13:10.945697] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:44.817 [2024-12-12 16:13:10.945758] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:44.817 [2024-12-12 16:13:10.945781] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:44.817 pt3 00:16:44.817 16:13:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.817 16:13:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:44.817 16:13:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:44.817 16:13:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:44.817 16:13:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.817 16:13:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.817 [2024-12-12 16:13:10.953205] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:44.817 [2024-12-12 16:13:10.953246] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:44.817 [2024-12-12 16:13:10.953261] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:16:44.817 [2024-12-12 16:13:10.953268] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:44.817 [2024-12-12 16:13:10.953631] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:44.817 [2024-12-12 16:13:10.953651] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:44.817 [2024-12-12 16:13:10.953707] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:44.818 [2024-12-12 16:13:10.953727] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:44.818 [2024-12-12 16:13:10.953850] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:44.818 [2024-12-12 16:13:10.953858] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:44.818 [2024-12-12 16:13:10.954106] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:44.818 [2024-12-12 16:13:10.960825] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:44.818 [2024-12-12 16:13:10.960849] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:16:44.818 [2024-12-12 16:13:10.961054] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:44.818 pt4 00:16:44.818 16:13:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.818 16:13:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:44.818 16:13:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:44.818 16:13:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:44.818 16:13:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:44.818 16:13:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:16:44.818 16:13:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:44.818 16:13:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:44.818 16:13:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:44.818 16:13:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:44.818 16:13:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:44.818 16:13:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:44.818 16:13:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:44.818 16:13:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.818 16:13:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:44.818 16:13:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.818 16:13:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.818 16:13:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.818 16:13:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:44.818 "name": "raid_bdev1", 00:16:44.818 "uuid": "81cf0071-acfb-4e44-a36a-85163b41daca", 00:16:44.818 "strip_size_kb": 64, 00:16:44.818 "state": "online", 00:16:44.818 "raid_level": "raid5f", 00:16:44.818 "superblock": true, 00:16:44.818 "num_base_bdevs": 4, 00:16:44.818 "num_base_bdevs_discovered": 4, 00:16:44.818 "num_base_bdevs_operational": 4, 00:16:44.818 "base_bdevs_list": [ 00:16:44.818 { 00:16:44.818 "name": "pt1", 00:16:44.818 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:44.818 "is_configured": true, 00:16:44.818 
"data_offset": 2048, 00:16:44.818 "data_size": 63488 00:16:44.818 }, 00:16:44.818 { 00:16:44.818 "name": "pt2", 00:16:44.818 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:44.818 "is_configured": true, 00:16:44.818 "data_offset": 2048, 00:16:44.818 "data_size": 63488 00:16:44.818 }, 00:16:44.818 { 00:16:44.818 "name": "pt3", 00:16:44.818 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:44.818 "is_configured": true, 00:16:44.818 "data_offset": 2048, 00:16:44.818 "data_size": 63488 00:16:44.818 }, 00:16:44.818 { 00:16:44.818 "name": "pt4", 00:16:44.818 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:44.818 "is_configured": true, 00:16:44.818 "data_offset": 2048, 00:16:44.818 "data_size": 63488 00:16:44.818 } 00:16:44.818 ] 00:16:44.818 }' 00:16:44.818 16:13:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:44.818 16:13:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.077 16:13:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:16:45.077 16:13:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:45.077 16:13:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:45.077 16:13:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:45.077 16:13:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:45.077 16:13:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:45.077 16:13:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:45.077 16:13:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:45.077 16:13:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.077 16:13:11 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.077 [2024-12-12 16:13:11.400718] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:45.077 16:13:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.337 16:13:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:45.337 "name": "raid_bdev1", 00:16:45.337 "aliases": [ 00:16:45.337 "81cf0071-acfb-4e44-a36a-85163b41daca" 00:16:45.337 ], 00:16:45.337 "product_name": "Raid Volume", 00:16:45.337 "block_size": 512, 00:16:45.337 "num_blocks": 190464, 00:16:45.337 "uuid": "81cf0071-acfb-4e44-a36a-85163b41daca", 00:16:45.337 "assigned_rate_limits": { 00:16:45.337 "rw_ios_per_sec": 0, 00:16:45.337 "rw_mbytes_per_sec": 0, 00:16:45.337 "r_mbytes_per_sec": 0, 00:16:45.337 "w_mbytes_per_sec": 0 00:16:45.337 }, 00:16:45.337 "claimed": false, 00:16:45.337 "zoned": false, 00:16:45.337 "supported_io_types": { 00:16:45.337 "read": true, 00:16:45.337 "write": true, 00:16:45.337 "unmap": false, 00:16:45.337 "flush": false, 00:16:45.337 "reset": true, 00:16:45.337 "nvme_admin": false, 00:16:45.337 "nvme_io": false, 00:16:45.337 "nvme_io_md": false, 00:16:45.337 "write_zeroes": true, 00:16:45.337 "zcopy": false, 00:16:45.337 "get_zone_info": false, 00:16:45.337 "zone_management": false, 00:16:45.337 "zone_append": false, 00:16:45.337 "compare": false, 00:16:45.337 "compare_and_write": false, 00:16:45.337 "abort": false, 00:16:45.337 "seek_hole": false, 00:16:45.337 "seek_data": false, 00:16:45.337 "copy": false, 00:16:45.337 "nvme_iov_md": false 00:16:45.337 }, 00:16:45.337 "driver_specific": { 00:16:45.337 "raid": { 00:16:45.337 "uuid": "81cf0071-acfb-4e44-a36a-85163b41daca", 00:16:45.337 "strip_size_kb": 64, 00:16:45.337 "state": "online", 00:16:45.337 "raid_level": "raid5f", 00:16:45.337 "superblock": true, 00:16:45.337 "num_base_bdevs": 4, 00:16:45.337 "num_base_bdevs_discovered": 4, 
00:16:45.337 "num_base_bdevs_operational": 4, 00:16:45.337 "base_bdevs_list": [ 00:16:45.337 { 00:16:45.337 "name": "pt1", 00:16:45.337 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:45.337 "is_configured": true, 00:16:45.337 "data_offset": 2048, 00:16:45.337 "data_size": 63488 00:16:45.337 }, 00:16:45.337 { 00:16:45.337 "name": "pt2", 00:16:45.337 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:45.337 "is_configured": true, 00:16:45.337 "data_offset": 2048, 00:16:45.337 "data_size": 63488 00:16:45.337 }, 00:16:45.337 { 00:16:45.337 "name": "pt3", 00:16:45.337 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:45.337 "is_configured": true, 00:16:45.337 "data_offset": 2048, 00:16:45.337 "data_size": 63488 00:16:45.337 }, 00:16:45.337 { 00:16:45.337 "name": "pt4", 00:16:45.337 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:45.337 "is_configured": true, 00:16:45.337 "data_offset": 2048, 00:16:45.337 "data_size": 63488 00:16:45.337 } 00:16:45.337 ] 00:16:45.337 } 00:16:45.337 } 00:16:45.337 }' 00:16:45.337 16:13:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:45.337 16:13:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:45.337 pt2 00:16:45.337 pt3 00:16:45.337 pt4' 00:16:45.337 16:13:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:45.337 16:13:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:45.337 16:13:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:45.337 16:13:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:45.337 16:13:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.337 16:13:11 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.337 16:13:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:45.337 16:13:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.337 16:13:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:45.337 16:13:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:45.337 16:13:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:45.337 16:13:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:45.337 16:13:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:45.337 16:13:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.337 16:13:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.337 16:13:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.337 16:13:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:45.337 16:13:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:45.337 16:13:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:45.337 16:13:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:45.337 16:13:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:45.337 16:13:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.337 
16:13:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.337 16:13:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.597 16:13:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:45.597 16:13:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:45.597 16:13:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:45.597 16:13:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:16:45.597 16:13:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:45.597 16:13:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.597 16:13:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.597 16:13:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.597 16:13:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:45.597 16:13:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:45.597 16:13:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:45.597 16:13:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:16:45.597 16:13:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.597 16:13:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.597 [2024-12-12 16:13:11.756090] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:45.597 16:13:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:16:45.597 16:13:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 81cf0071-acfb-4e44-a36a-85163b41daca '!=' 81cf0071-acfb-4e44-a36a-85163b41daca ']' 00:16:45.597 16:13:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:16:45.597 16:13:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:45.597 16:13:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:16:45.597 16:13:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:16:45.597 16:13:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.597 16:13:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.597 [2024-12-12 16:13:11.799879] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:45.597 16:13:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.597 16:13:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:45.597 16:13:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:45.597 16:13:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:45.597 16:13:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:45.597 16:13:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:45.597 16:13:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:45.597 16:13:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:45.597 16:13:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:45.597 16:13:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:16:45.597 16:13:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:45.597 16:13:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.597 16:13:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:45.597 16:13:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.597 16:13:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.597 16:13:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.597 16:13:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:45.597 "name": "raid_bdev1", 00:16:45.597 "uuid": "81cf0071-acfb-4e44-a36a-85163b41daca", 00:16:45.597 "strip_size_kb": 64, 00:16:45.597 "state": "online", 00:16:45.597 "raid_level": "raid5f", 00:16:45.597 "superblock": true, 00:16:45.597 "num_base_bdevs": 4, 00:16:45.597 "num_base_bdevs_discovered": 3, 00:16:45.597 "num_base_bdevs_operational": 3, 00:16:45.597 "base_bdevs_list": [ 00:16:45.597 { 00:16:45.597 "name": null, 00:16:45.597 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:45.597 "is_configured": false, 00:16:45.597 "data_offset": 0, 00:16:45.597 "data_size": 63488 00:16:45.597 }, 00:16:45.597 { 00:16:45.597 "name": "pt2", 00:16:45.597 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:45.597 "is_configured": true, 00:16:45.597 "data_offset": 2048, 00:16:45.597 "data_size": 63488 00:16:45.597 }, 00:16:45.597 { 00:16:45.597 "name": "pt3", 00:16:45.597 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:45.597 "is_configured": true, 00:16:45.597 "data_offset": 2048, 00:16:45.597 "data_size": 63488 00:16:45.597 }, 00:16:45.597 { 00:16:45.597 "name": "pt4", 00:16:45.597 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:45.597 "is_configured": true, 00:16:45.597 
"data_offset": 2048, 00:16:45.597 "data_size": 63488 00:16:45.597 } 00:16:45.597 ] 00:16:45.597 }' 00:16:45.597 16:13:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:45.597 16:13:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.167 16:13:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:46.167 16:13:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.167 16:13:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.167 [2024-12-12 16:13:12.283500] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:46.167 [2024-12-12 16:13:12.283533] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:46.167 [2024-12-12 16:13:12.283646] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:46.167 [2024-12-12 16:13:12.283734] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:46.167 [2024-12-12 16:13:12.283744] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:16:46.167 16:13:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.167 16:13:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.167 16:13:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:16:46.167 16:13:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.167 16:13:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.167 16:13:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.167 16:13:12 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:16:46.167 16:13:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:16:46.167 16:13:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:16:46.167 16:13:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:46.167 16:13:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:16:46.167 16:13:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.167 16:13:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.167 16:13:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.167 16:13:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:46.167 16:13:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:46.167 16:13:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:16:46.167 16:13:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.167 16:13:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.167 16:13:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.167 16:13:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:46.167 16:13:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:46.167 16:13:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:16:46.167 16:13:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.167 16:13:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.167 16:13:12 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.167 16:13:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:46.167 16:13:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:46.167 16:13:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:16:46.167 16:13:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:46.167 16:13:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:46.167 16:13:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.167 16:13:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.167 [2024-12-12 16:13:12.379308] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:46.167 [2024-12-12 16:13:12.379360] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:46.167 [2024-12-12 16:13:12.379378] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:16:46.167 [2024-12-12 16:13:12.379387] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:46.167 [2024-12-12 16:13:12.381519] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:46.167 [2024-12-12 16:13:12.381557] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:46.167 [2024-12-12 16:13:12.381634] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:46.167 [2024-12-12 16:13:12.381681] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:46.167 pt2 00:16:46.167 16:13:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.167 16:13:12 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:16:46.167 16:13:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:46.167 16:13:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:46.167 16:13:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:46.167 16:13:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:46.167 16:13:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:46.167 16:13:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:46.167 16:13:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:46.167 16:13:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:46.167 16:13:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:46.167 16:13:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.167 16:13:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:46.167 16:13:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.167 16:13:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.167 16:13:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.167 16:13:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:46.167 "name": "raid_bdev1", 00:16:46.167 "uuid": "81cf0071-acfb-4e44-a36a-85163b41daca", 00:16:46.167 "strip_size_kb": 64, 00:16:46.167 "state": "configuring", 00:16:46.167 "raid_level": "raid5f", 00:16:46.167 "superblock": true, 00:16:46.167 
"num_base_bdevs": 4, 00:16:46.167 "num_base_bdevs_discovered": 1, 00:16:46.167 "num_base_bdevs_operational": 3, 00:16:46.167 "base_bdevs_list": [ 00:16:46.167 { 00:16:46.167 "name": null, 00:16:46.167 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.167 "is_configured": false, 00:16:46.167 "data_offset": 2048, 00:16:46.167 "data_size": 63488 00:16:46.167 }, 00:16:46.167 { 00:16:46.167 "name": "pt2", 00:16:46.167 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:46.167 "is_configured": true, 00:16:46.167 "data_offset": 2048, 00:16:46.167 "data_size": 63488 00:16:46.167 }, 00:16:46.167 { 00:16:46.167 "name": null, 00:16:46.167 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:46.167 "is_configured": false, 00:16:46.167 "data_offset": 2048, 00:16:46.167 "data_size": 63488 00:16:46.167 }, 00:16:46.167 { 00:16:46.167 "name": null, 00:16:46.167 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:46.167 "is_configured": false, 00:16:46.167 "data_offset": 2048, 00:16:46.167 "data_size": 63488 00:16:46.167 } 00:16:46.167 ] 00:16:46.167 }' 00:16:46.167 16:13:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:46.167 16:13:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.736 16:13:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:16:46.736 16:13:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:46.736 16:13:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:46.736 16:13:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.736 16:13:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.736 [2024-12-12 16:13:12.842563] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:46.736 [2024-12-12 
16:13:12.842695] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:46.736 [2024-12-12 16:13:12.842742] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:16:46.736 [2024-12-12 16:13:12.842773] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:46.736 [2024-12-12 16:13:12.843227] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:46.736 [2024-12-12 16:13:12.843298] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:46.736 [2024-12-12 16:13:12.843414] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:46.736 [2024-12-12 16:13:12.843465] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:46.736 pt3 00:16:46.736 16:13:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.736 16:13:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:16:46.736 16:13:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:46.736 16:13:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:46.736 16:13:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:46.736 16:13:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:46.736 16:13:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:46.736 16:13:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:46.736 16:13:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:46.736 16:13:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:16:46.736 16:13:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:46.736 16:13:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.736 16:13:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.736 16:13:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:46.736 16:13:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.736 16:13:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.736 16:13:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:46.736 "name": "raid_bdev1", 00:16:46.736 "uuid": "81cf0071-acfb-4e44-a36a-85163b41daca", 00:16:46.736 "strip_size_kb": 64, 00:16:46.736 "state": "configuring", 00:16:46.736 "raid_level": "raid5f", 00:16:46.736 "superblock": true, 00:16:46.736 "num_base_bdevs": 4, 00:16:46.736 "num_base_bdevs_discovered": 2, 00:16:46.736 "num_base_bdevs_operational": 3, 00:16:46.736 "base_bdevs_list": [ 00:16:46.736 { 00:16:46.736 "name": null, 00:16:46.736 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.736 "is_configured": false, 00:16:46.736 "data_offset": 2048, 00:16:46.736 "data_size": 63488 00:16:46.736 }, 00:16:46.736 { 00:16:46.736 "name": "pt2", 00:16:46.736 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:46.736 "is_configured": true, 00:16:46.736 "data_offset": 2048, 00:16:46.736 "data_size": 63488 00:16:46.736 }, 00:16:46.736 { 00:16:46.736 "name": "pt3", 00:16:46.736 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:46.736 "is_configured": true, 00:16:46.736 "data_offset": 2048, 00:16:46.736 "data_size": 63488 00:16:46.736 }, 00:16:46.736 { 00:16:46.736 "name": null, 00:16:46.736 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:46.737 "is_configured": false, 00:16:46.737 "data_offset": 2048, 
00:16:46.737 "data_size": 63488 00:16:46.737 } 00:16:46.737 ] 00:16:46.737 }' 00:16:46.737 16:13:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:46.737 16:13:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.997 16:13:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:16:46.997 16:13:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:46.997 16:13:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:16:46.997 16:13:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:46.997 16:13:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.997 16:13:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.997 [2024-12-12 16:13:13.289801] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:46.997 [2024-12-12 16:13:13.289865] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:46.997 [2024-12-12 16:13:13.289904] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:16:46.997 [2024-12-12 16:13:13.289913] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:46.997 [2024-12-12 16:13:13.290368] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:46.997 [2024-12-12 16:13:13.290393] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:46.997 [2024-12-12 16:13:13.290479] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:46.997 [2024-12-12 16:13:13.290508] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:46.997 [2024-12-12 16:13:13.290648] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:46.997 [2024-12-12 16:13:13.290657] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:46.997 [2024-12-12 16:13:13.290893] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:16:46.997 [2024-12-12 16:13:13.298066] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:46.997 [2024-12-12 16:13:13.298095] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:16:46.997 [2024-12-12 16:13:13.298424] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:46.997 pt4 00:16:46.997 16:13:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.997 16:13:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:46.997 16:13:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:46.997 16:13:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:46.997 16:13:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:46.997 16:13:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:46.997 16:13:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:46.997 16:13:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:46.997 16:13:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:46.997 16:13:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:46.997 16:13:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:46.997 
16:13:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.997 16:13:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:46.997 16:13:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.997 16:13:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.997 16:13:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.997 16:13:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:46.997 "name": "raid_bdev1", 00:16:46.997 "uuid": "81cf0071-acfb-4e44-a36a-85163b41daca", 00:16:46.997 "strip_size_kb": 64, 00:16:46.997 "state": "online", 00:16:46.997 "raid_level": "raid5f", 00:16:46.997 "superblock": true, 00:16:46.997 "num_base_bdevs": 4, 00:16:46.997 "num_base_bdevs_discovered": 3, 00:16:46.997 "num_base_bdevs_operational": 3, 00:16:46.997 "base_bdevs_list": [ 00:16:46.997 { 00:16:46.997 "name": null, 00:16:46.997 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.997 "is_configured": false, 00:16:46.997 "data_offset": 2048, 00:16:46.997 "data_size": 63488 00:16:46.997 }, 00:16:46.997 { 00:16:46.997 "name": "pt2", 00:16:46.997 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:46.997 "is_configured": true, 00:16:46.997 "data_offset": 2048, 00:16:46.997 "data_size": 63488 00:16:46.997 }, 00:16:46.997 { 00:16:46.997 "name": "pt3", 00:16:46.997 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:46.997 "is_configured": true, 00:16:46.997 "data_offset": 2048, 00:16:46.997 "data_size": 63488 00:16:46.997 }, 00:16:46.997 { 00:16:46.997 "name": "pt4", 00:16:46.997 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:46.997 "is_configured": true, 00:16:46.997 "data_offset": 2048, 00:16:46.997 "data_size": 63488 00:16:46.997 } 00:16:46.997 ] 00:16:46.997 }' 00:16:46.997 16:13:13 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:46.997 16:13:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.566 16:13:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:47.566 16:13:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.566 16:13:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.566 [2024-12-12 16:13:13.726914] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:47.566 [2024-12-12 16:13:13.726996] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:47.566 [2024-12-12 16:13:13.727102] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:47.566 [2024-12-12 16:13:13.727196] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:47.566 [2024-12-12 16:13:13.727247] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:16:47.566 16:13:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.566 16:13:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:16:47.566 16:13:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.566 16:13:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.566 16:13:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.566 16:13:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.566 16:13:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:16:47.566 16:13:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:16:47.566 16:13:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:16:47.566 16:13:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:16:47.566 16:13:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:16:47.566 16:13:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.566 16:13:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.566 16:13:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.566 16:13:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:47.566 16:13:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.566 16:13:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.566 [2024-12-12 16:13:13.782791] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:47.566 [2024-12-12 16:13:13.782848] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:47.566 [2024-12-12 16:13:13.782874] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:16:47.566 [2024-12-12 16:13:13.782885] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:47.566 [2024-12-12 16:13:13.785162] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:47.566 [2024-12-12 16:13:13.785200] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:47.566 [2024-12-12 16:13:13.785277] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:47.566 [2024-12-12 16:13:13.785342] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:47.566 
[2024-12-12 16:13:13.785483] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:16:47.566 [2024-12-12 16:13:13.785498] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:47.566 [2024-12-12 16:13:13.785512] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:16:47.566 [2024-12-12 16:13:13.785571] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:47.566 [2024-12-12 16:13:13.785652] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:47.566 pt1 00:16:47.566 16:13:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.566 16:13:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:16:47.566 16:13:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:16:47.567 16:13:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:47.567 16:13:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:47.567 16:13:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:47.567 16:13:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:47.567 16:13:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:47.567 16:13:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:47.567 16:13:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:47.567 16:13:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:47.567 16:13:13 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:16:47.567 16:13:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:47.567 16:13:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.567 16:13:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.567 16:13:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.567 16:13:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.567 16:13:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:47.567 "name": "raid_bdev1", 00:16:47.567 "uuid": "81cf0071-acfb-4e44-a36a-85163b41daca", 00:16:47.567 "strip_size_kb": 64, 00:16:47.567 "state": "configuring", 00:16:47.567 "raid_level": "raid5f", 00:16:47.567 "superblock": true, 00:16:47.567 "num_base_bdevs": 4, 00:16:47.567 "num_base_bdevs_discovered": 2, 00:16:47.567 "num_base_bdevs_operational": 3, 00:16:47.567 "base_bdevs_list": [ 00:16:47.567 { 00:16:47.567 "name": null, 00:16:47.567 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:47.567 "is_configured": false, 00:16:47.567 "data_offset": 2048, 00:16:47.567 "data_size": 63488 00:16:47.567 }, 00:16:47.567 { 00:16:47.567 "name": "pt2", 00:16:47.567 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:47.567 "is_configured": true, 00:16:47.567 "data_offset": 2048, 00:16:47.567 "data_size": 63488 00:16:47.567 }, 00:16:47.567 { 00:16:47.567 "name": "pt3", 00:16:47.567 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:47.567 "is_configured": true, 00:16:47.567 "data_offset": 2048, 00:16:47.567 "data_size": 63488 00:16:47.567 }, 00:16:47.567 { 00:16:47.567 "name": null, 00:16:47.567 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:47.567 "is_configured": false, 00:16:47.567 "data_offset": 2048, 00:16:47.567 "data_size": 63488 00:16:47.567 } 00:16:47.567 ] 
00:16:47.567 }' 00:16:47.567 16:13:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:47.567 16:13:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.136 16:13:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:16:48.136 16:13:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:48.136 16:13:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.136 16:13:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.136 16:13:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.136 16:13:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:16:48.136 16:13:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:48.136 16:13:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.136 16:13:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.136 [2024-12-12 16:13:14.258002] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:48.136 [2024-12-12 16:13:14.258120] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:48.136 [2024-12-12 16:13:14.258163] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:16:48.136 [2024-12-12 16:13:14.258194] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:48.136 [2024-12-12 16:13:14.258647] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:48.136 [2024-12-12 16:13:14.258708] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:16:48.136 [2024-12-12 16:13:14.258819] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:48.136 [2024-12-12 16:13:14.258870] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:48.136 [2024-12-12 16:13:14.259065] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:16:48.136 [2024-12-12 16:13:14.259105] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:48.136 [2024-12-12 16:13:14.259380] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:48.136 [2024-12-12 16:13:14.266307] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:16:48.136 [2024-12-12 16:13:14.266367] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:16:48.136 [2024-12-12 16:13:14.266678] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:48.136 pt4 00:16:48.136 16:13:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.136 16:13:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:48.136 16:13:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:48.136 16:13:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:48.136 16:13:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:48.136 16:13:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:48.136 16:13:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:48.136 16:13:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:48.136 16:13:14 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:48.136 16:13:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:48.136 16:13:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:48.136 16:13:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.136 16:13:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:48.136 16:13:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.136 16:13:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.136 16:13:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.136 16:13:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:48.136 "name": "raid_bdev1", 00:16:48.136 "uuid": "81cf0071-acfb-4e44-a36a-85163b41daca", 00:16:48.136 "strip_size_kb": 64, 00:16:48.136 "state": "online", 00:16:48.136 "raid_level": "raid5f", 00:16:48.136 "superblock": true, 00:16:48.136 "num_base_bdevs": 4, 00:16:48.136 "num_base_bdevs_discovered": 3, 00:16:48.137 "num_base_bdevs_operational": 3, 00:16:48.137 "base_bdevs_list": [ 00:16:48.137 { 00:16:48.137 "name": null, 00:16:48.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:48.137 "is_configured": false, 00:16:48.137 "data_offset": 2048, 00:16:48.137 "data_size": 63488 00:16:48.137 }, 00:16:48.137 { 00:16:48.137 "name": "pt2", 00:16:48.137 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:48.137 "is_configured": true, 00:16:48.137 "data_offset": 2048, 00:16:48.137 "data_size": 63488 00:16:48.137 }, 00:16:48.137 { 00:16:48.137 "name": "pt3", 00:16:48.137 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:48.137 "is_configured": true, 00:16:48.137 "data_offset": 2048, 00:16:48.137 "data_size": 63488 
00:16:48.137 }, 00:16:48.137 { 00:16:48.137 "name": "pt4", 00:16:48.137 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:48.137 "is_configured": true, 00:16:48.137 "data_offset": 2048, 00:16:48.137 "data_size": 63488 00:16:48.137 } 00:16:48.137 ] 00:16:48.137 }' 00:16:48.137 16:13:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:48.137 16:13:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.397 16:13:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:48.397 16:13:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:16:48.397 16:13:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.397 16:13:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.397 16:13:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.397 16:13:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:16:48.397 16:13:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:16:48.397 16:13:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:48.397 16:13:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.397 16:13:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.397 [2024-12-12 16:13:14.683936] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:48.397 16:13:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.397 16:13:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 81cf0071-acfb-4e44-a36a-85163b41daca '!=' 81cf0071-acfb-4e44-a36a-85163b41daca ']' 00:16:48.397 16:13:14 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 86203 00:16:48.397 16:13:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 86203 ']' 00:16:48.397 16:13:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 86203 00:16:48.397 16:13:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:16:48.397 16:13:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:48.397 16:13:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86203 00:16:48.656 16:13:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:48.656 killing process with pid 86203 00:16:48.656 16:13:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:48.656 16:13:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86203' 00:16:48.656 16:13:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 86203 00:16:48.656 [2024-12-12 16:13:14.750057] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:48.656 [2024-12-12 16:13:14.750163] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:48.656 16:13:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 86203 00:16:48.656 [2024-12-12 16:13:14.750253] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:48.656 [2024-12-12 16:13:14.750270] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:16:48.916 [2024-12-12 16:13:15.129376] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:49.855 16:13:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:16:49.855 
00:16:49.855 real 0m8.343s 00:16:49.855 user 0m13.124s 00:16:49.855 sys 0m1.562s 00:16:49.855 ************************************ 00:16:49.855 END TEST raid5f_superblock_test 00:16:49.855 ************************************ 00:16:49.855 16:13:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:49.855 16:13:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.115 16:13:16 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:16:50.115 16:13:16 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:16:50.115 16:13:16 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:50.115 16:13:16 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:50.115 16:13:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:50.115 ************************************ 00:16:50.115 START TEST raid5f_rebuild_test 00:16:50.115 ************************************ 00:16:50.115 16:13:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 false false true 00:16:50.115 16:13:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:16:50.115 16:13:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:16:50.115 16:13:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:16:50.115 16:13:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:50.115 16:13:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:50.115 16:13:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:50.115 16:13:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:50.115 16:13:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:16:50.115 16:13:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:50.115 16:13:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:50.115 16:13:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:50.115 16:13:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:50.115 16:13:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:50.115 16:13:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:50.115 16:13:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:50.115 16:13:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:50.115 16:13:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:16:50.115 16:13:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:50.115 16:13:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:50.115 16:13:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:50.115 16:13:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:50.115 16:13:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:50.115 16:13:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:50.115 16:13:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:50.115 16:13:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:50.115 16:13:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:50.115 16:13:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:16:50.115 16:13:16 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:16:50.115 16:13:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:16:50.115 16:13:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:16:50.115 16:13:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:16:50.115 16:13:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=86684 00:16:50.115 16:13:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:50.115 16:13:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 86684 00:16:50.115 16:13:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 86684 ']' 00:16:50.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:50.115 16:13:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:50.115 16:13:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:50.115 16:13:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:50.115 16:13:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:50.115 16:13:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.115 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:50.115 Zero copy mechanism will not be used. 00:16:50.115 [2024-12-12 16:13:16.372177] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:16:50.115 [2024-12-12 16:13:16.372297] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86684 ] 00:16:50.375 [2024-12-12 16:13:16.546395] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:50.375 [2024-12-12 16:13:16.664049] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:50.634 [2024-12-12 16:13:16.856344] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:50.634 [2024-12-12 16:13:16.856401] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:50.894 16:13:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:50.894 16:13:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:16:50.894 16:13:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:50.894 16:13:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:50.894 16:13:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.895 16:13:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.895 BaseBdev1_malloc 00:16:50.895 16:13:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.895 16:13:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:50.895 16:13:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.895 16:13:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.895 [2024-12-12 16:13:17.234556] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:16:50.895 [2024-12-12 16:13:17.234616] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:50.895 [2024-12-12 16:13:17.234638] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:50.895 [2024-12-12 16:13:17.234648] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:50.895 [2024-12-12 16:13:17.236695] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:50.895 [2024-12-12 16:13:17.236737] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:50.895 BaseBdev1 00:16:50.895 16:13:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.895 16:13:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:50.895 16:13:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:50.895 16:13:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.895 16:13:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.155 BaseBdev2_malloc 00:16:51.155 16:13:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.155 16:13:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:51.155 16:13:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.155 16:13:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.155 [2024-12-12 16:13:17.286652] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:51.155 [2024-12-12 16:13:17.286710] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:51.155 [2024-12-12 16:13:17.286730] vbdev_passthru.c: 
682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:51.155 [2024-12-12 16:13:17.286741] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:51.155 [2024-12-12 16:13:17.288802] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:51.155 [2024-12-12 16:13:17.288926] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:51.155 BaseBdev2 00:16:51.155 16:13:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.155 16:13:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:51.155 16:13:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:51.155 16:13:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.155 16:13:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.155 BaseBdev3_malloc 00:16:51.155 16:13:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.155 16:13:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:51.155 16:13:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.155 16:13:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.155 [2024-12-12 16:13:17.374695] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:51.155 [2024-12-12 16:13:17.374746] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:51.155 [2024-12-12 16:13:17.374768] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:51.155 [2024-12-12 16:13:17.374779] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:51.155 
[2024-12-12 16:13:17.376834] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:51.155 [2024-12-12 16:13:17.376932] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:51.155 BaseBdev3 00:16:51.155 16:13:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.155 16:13:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:51.155 16:13:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:51.155 16:13:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.155 16:13:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.155 BaseBdev4_malloc 00:16:51.155 16:13:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.155 16:13:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:16:51.155 16:13:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.155 16:13:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.155 [2024-12-12 16:13:17.428210] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:16:51.155 [2024-12-12 16:13:17.428263] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:51.155 [2024-12-12 16:13:17.428283] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:51.155 [2024-12-12 16:13:17.428294] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:51.155 [2024-12-12 16:13:17.430325] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:51.155 [2024-12-12 16:13:17.430358] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev4 00:16:51.155 BaseBdev4 00:16:51.155 16:13:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.155 16:13:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:51.155 16:13:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.155 16:13:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.155 spare_malloc 00:16:51.155 16:13:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.155 16:13:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:51.155 16:13:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.155 16:13:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.155 spare_delay 00:16:51.155 16:13:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.155 16:13:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:51.155 16:13:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.155 16:13:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.155 [2024-12-12 16:13:17.492612] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:51.155 [2024-12-12 16:13:17.492912] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:51.155 [2024-12-12 16:13:17.492961] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:51.155 [2024-12-12 16:13:17.492972] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:51.155 [2024-12-12 16:13:17.495102] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:51.155 [2024-12-12 16:13:17.495201] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:51.155 spare 00:16:51.155 16:13:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.155 16:13:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:16:51.155 16:13:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.155 16:13:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.155 [2024-12-12 16:13:17.504634] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:51.414 [2024-12-12 16:13:17.506465] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:51.414 [2024-12-12 16:13:17.506532] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:51.414 [2024-12-12 16:13:17.506583] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:51.414 [2024-12-12 16:13:17.506673] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:51.414 [2024-12-12 16:13:17.506695] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:51.414 [2024-12-12 16:13:17.506978] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:51.414 [2024-12-12 16:13:17.514383] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:51.414 [2024-12-12 16:13:17.514405] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:51.414 [2024-12-12 16:13:17.514599] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:51.414 16:13:17 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.414 16:13:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:51.414 16:13:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:51.414 16:13:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:51.414 16:13:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:51.414 16:13:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:51.414 16:13:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:51.414 16:13:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:51.414 16:13:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:51.414 16:13:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:51.414 16:13:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:51.414 16:13:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.414 16:13:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:51.414 16:13:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.414 16:13:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.414 16:13:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.414 16:13:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:51.414 "name": "raid_bdev1", 00:16:51.414 "uuid": "61e49867-96f9-48c1-9f2b-b1105d719514", 00:16:51.414 "strip_size_kb": 64, 00:16:51.414 "state": "online", 00:16:51.414 
"raid_level": "raid5f", 00:16:51.414 "superblock": false, 00:16:51.414 "num_base_bdevs": 4, 00:16:51.414 "num_base_bdevs_discovered": 4, 00:16:51.414 "num_base_bdevs_operational": 4, 00:16:51.414 "base_bdevs_list": [ 00:16:51.414 { 00:16:51.414 "name": "BaseBdev1", 00:16:51.414 "uuid": "7063d5f8-52d7-50b6-9a6d-392e11be0aaf", 00:16:51.414 "is_configured": true, 00:16:51.414 "data_offset": 0, 00:16:51.414 "data_size": 65536 00:16:51.414 }, 00:16:51.414 { 00:16:51.414 "name": "BaseBdev2", 00:16:51.414 "uuid": "389c413a-9e87-508d-9c3e-f1ba7f84ecc3", 00:16:51.414 "is_configured": true, 00:16:51.414 "data_offset": 0, 00:16:51.414 "data_size": 65536 00:16:51.414 }, 00:16:51.414 { 00:16:51.414 "name": "BaseBdev3", 00:16:51.414 "uuid": "a3e36207-9d8b-59c4-aa87-20aaddb69913", 00:16:51.414 "is_configured": true, 00:16:51.414 "data_offset": 0, 00:16:51.414 "data_size": 65536 00:16:51.414 }, 00:16:51.414 { 00:16:51.414 "name": "BaseBdev4", 00:16:51.414 "uuid": "f65896e5-71c3-5f82-be97-f497ccced304", 00:16:51.414 "is_configured": true, 00:16:51.414 "data_offset": 0, 00:16:51.414 "data_size": 65536 00:16:51.414 } 00:16:51.414 ] 00:16:51.414 }' 00:16:51.414 16:13:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:51.414 16:13:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.674 16:13:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:51.674 16:13:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:51.674 16:13:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.674 16:13:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.674 [2024-12-12 16:13:17.966467] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:51.674 16:13:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:16:51.674 16:13:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 00:16:51.674 16:13:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.674 16:13:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.674 16:13:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:51.674 16:13:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.674 16:13:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.933 16:13:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:16:51.933 16:13:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:51.933 16:13:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:51.933 16:13:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:51.933 16:13:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:51.933 16:13:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:51.933 16:13:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:51.933 16:13:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:51.933 16:13:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:51.933 16:13:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:51.933 16:13:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:51.933 16:13:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:51.933 16:13:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 
00:16:51.933 16:13:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:51.933 [2024-12-12 16:13:18.230083] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:51.933 /dev/nbd0 00:16:51.933 16:13:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:51.933 16:13:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:51.933 16:13:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:51.934 16:13:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:51.934 16:13:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:51.934 16:13:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:51.934 16:13:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:52.193 16:13:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:52.193 16:13:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:52.193 16:13:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:52.193 16:13:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:52.193 1+0 records in 00:16:52.193 1+0 records out 00:16:52.193 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000246929 s, 16.6 MB/s 00:16:52.193 16:13:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:52.193 16:13:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:52.193 16:13:18 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:52.193 16:13:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:52.193 16:13:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:52.193 16:13:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:52.193 16:13:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:52.193 16:13:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:16:52.193 16:13:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:16:52.193 16:13:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:16:52.193 16:13:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:16:52.453 512+0 records in 00:16:52.453 512+0 records out 00:16:52.453 100663296 bytes (101 MB, 96 MiB) copied, 0.463519 s, 217 MB/s 00:16:52.453 16:13:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:52.453 16:13:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:52.453 16:13:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:52.453 16:13:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:52.453 16:13:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:52.453 16:13:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:52.453 16:13:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:52.713 16:13:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:52.713 
[2024-12-12 16:13:18.978072] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:52.713 16:13:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:52.713 16:13:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:52.713 16:13:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:52.713 16:13:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:52.713 16:13:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:52.713 16:13:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:52.713 16:13:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:52.713 16:13:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:52.713 16:13:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.713 16:13:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.713 [2024-12-12 16:13:18.991834] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:52.713 16:13:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.713 16:13:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:52.713 16:13:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:52.713 16:13:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:52.713 16:13:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:52.713 16:13:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:52.713 16:13:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:16:52.713 16:13:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:52.713 16:13:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:52.713 16:13:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:52.713 16:13:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:52.713 16:13:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.713 16:13:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:52.713 16:13:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.713 16:13:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.713 16:13:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.713 16:13:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:52.713 "name": "raid_bdev1", 00:16:52.713 "uuid": "61e49867-96f9-48c1-9f2b-b1105d719514", 00:16:52.713 "strip_size_kb": 64, 00:16:52.713 "state": "online", 00:16:52.713 "raid_level": "raid5f", 00:16:52.713 "superblock": false, 00:16:52.713 "num_base_bdevs": 4, 00:16:52.713 "num_base_bdevs_discovered": 3, 00:16:52.713 "num_base_bdevs_operational": 3, 00:16:52.713 "base_bdevs_list": [ 00:16:52.713 { 00:16:52.713 "name": null, 00:16:52.713 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.713 "is_configured": false, 00:16:52.713 "data_offset": 0, 00:16:52.713 "data_size": 65536 00:16:52.713 }, 00:16:52.713 { 00:16:52.713 "name": "BaseBdev2", 00:16:52.713 "uuid": "389c413a-9e87-508d-9c3e-f1ba7f84ecc3", 00:16:52.713 "is_configured": true, 00:16:52.713 "data_offset": 0, 00:16:52.713 "data_size": 65536 00:16:52.713 }, 00:16:52.713 { 00:16:52.713 "name": "BaseBdev3", 00:16:52.713 "uuid": 
"a3e36207-9d8b-59c4-aa87-20aaddb69913", 00:16:52.713 "is_configured": true, 00:16:52.713 "data_offset": 0, 00:16:52.713 "data_size": 65536 00:16:52.713 }, 00:16:52.713 { 00:16:52.713 "name": "BaseBdev4", 00:16:52.713 "uuid": "f65896e5-71c3-5f82-be97-f497ccced304", 00:16:52.713 "is_configured": true, 00:16:52.713 "data_offset": 0, 00:16:52.713 "data_size": 65536 00:16:52.713 } 00:16:52.713 ] 00:16:52.713 }' 00:16:52.713 16:13:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:52.713 16:13:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.282 16:13:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:53.282 16:13:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.282 16:13:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.282 [2024-12-12 16:13:19.439181] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:53.282 [2024-12-12 16:13:19.453678] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:16:53.282 16:13:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.282 16:13:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:53.282 [2024-12-12 16:13:19.462550] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:54.221 16:13:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:54.221 16:13:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:54.221 16:13:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:54.221 16:13:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:54.221 16:13:20 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:54.221 16:13:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.221 16:13:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:54.221 16:13:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.221 16:13:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.221 16:13:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.221 16:13:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:54.221 "name": "raid_bdev1", 00:16:54.221 "uuid": "61e49867-96f9-48c1-9f2b-b1105d719514", 00:16:54.221 "strip_size_kb": 64, 00:16:54.221 "state": "online", 00:16:54.221 "raid_level": "raid5f", 00:16:54.221 "superblock": false, 00:16:54.221 "num_base_bdevs": 4, 00:16:54.221 "num_base_bdevs_discovered": 4, 00:16:54.221 "num_base_bdevs_operational": 4, 00:16:54.221 "process": { 00:16:54.221 "type": "rebuild", 00:16:54.221 "target": "spare", 00:16:54.221 "progress": { 00:16:54.221 "blocks": 19200, 00:16:54.221 "percent": 9 00:16:54.221 } 00:16:54.221 }, 00:16:54.221 "base_bdevs_list": [ 00:16:54.221 { 00:16:54.221 "name": "spare", 00:16:54.221 "uuid": "e10b6076-a3c9-5ff2-825f-b7eacae71383", 00:16:54.221 "is_configured": true, 00:16:54.221 "data_offset": 0, 00:16:54.221 "data_size": 65536 00:16:54.221 }, 00:16:54.221 { 00:16:54.221 "name": "BaseBdev2", 00:16:54.221 "uuid": "389c413a-9e87-508d-9c3e-f1ba7f84ecc3", 00:16:54.221 "is_configured": true, 00:16:54.221 "data_offset": 0, 00:16:54.221 "data_size": 65536 00:16:54.221 }, 00:16:54.221 { 00:16:54.221 "name": "BaseBdev3", 00:16:54.221 "uuid": "a3e36207-9d8b-59c4-aa87-20aaddb69913", 00:16:54.221 "is_configured": true, 00:16:54.221 "data_offset": 0, 00:16:54.221 "data_size": 65536 00:16:54.221 }, 
00:16:54.221 { 00:16:54.221 "name": "BaseBdev4", 00:16:54.221 "uuid": "f65896e5-71c3-5f82-be97-f497ccced304", 00:16:54.221 "is_configured": true, 00:16:54.221 "data_offset": 0, 00:16:54.221 "data_size": 65536 00:16:54.221 } 00:16:54.221 ] 00:16:54.221 }' 00:16:54.221 16:13:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:54.221 16:13:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:54.221 16:13:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:54.481 16:13:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:54.481 16:13:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:54.481 16:13:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.481 16:13:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.481 [2024-12-12 16:13:20.593552] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:54.481 [2024-12-12 16:13:20.669384] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:54.481 [2024-12-12 16:13:20.669446] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:54.481 [2024-12-12 16:13:20.669481] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:54.481 [2024-12-12 16:13:20.669492] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:54.481 16:13:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.481 16:13:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:54.481 16:13:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:16:54.481 16:13:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:54.481 16:13:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:54.481 16:13:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:54.481 16:13:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:54.481 16:13:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:54.481 16:13:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:54.481 16:13:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:54.481 16:13:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:54.481 16:13:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.481 16:13:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.481 16:13:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.481 16:13:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:54.481 16:13:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.481 16:13:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:54.481 "name": "raid_bdev1", 00:16:54.481 "uuid": "61e49867-96f9-48c1-9f2b-b1105d719514", 00:16:54.481 "strip_size_kb": 64, 00:16:54.481 "state": "online", 00:16:54.481 "raid_level": "raid5f", 00:16:54.481 "superblock": false, 00:16:54.481 "num_base_bdevs": 4, 00:16:54.481 "num_base_bdevs_discovered": 3, 00:16:54.481 "num_base_bdevs_operational": 3, 00:16:54.482 "base_bdevs_list": [ 00:16:54.482 { 00:16:54.482 "name": null, 00:16:54.482 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:54.482 "is_configured": false, 00:16:54.482 "data_offset": 0, 00:16:54.482 "data_size": 65536 00:16:54.482 }, 00:16:54.482 { 00:16:54.482 "name": "BaseBdev2", 00:16:54.482 "uuid": "389c413a-9e87-508d-9c3e-f1ba7f84ecc3", 00:16:54.482 "is_configured": true, 00:16:54.482 "data_offset": 0, 00:16:54.482 "data_size": 65536 00:16:54.482 }, 00:16:54.482 { 00:16:54.482 "name": "BaseBdev3", 00:16:54.482 "uuid": "a3e36207-9d8b-59c4-aa87-20aaddb69913", 00:16:54.482 "is_configured": true, 00:16:54.482 "data_offset": 0, 00:16:54.482 "data_size": 65536 00:16:54.482 }, 00:16:54.482 { 00:16:54.482 "name": "BaseBdev4", 00:16:54.482 "uuid": "f65896e5-71c3-5f82-be97-f497ccced304", 00:16:54.482 "is_configured": true, 00:16:54.482 "data_offset": 0, 00:16:54.482 "data_size": 65536 00:16:54.482 } 00:16:54.482 ] 00:16:54.482 }' 00:16:54.482 16:13:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:54.482 16:13:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.741 16:13:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:54.741 16:13:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:54.741 16:13:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:54.741 16:13:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:54.741 16:13:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:54.741 16:13:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:55.000 16:13:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.000 16:13:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.000 16:13:21 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.000 16:13:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.000 16:13:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:55.000 "name": "raid_bdev1", 00:16:55.000 "uuid": "61e49867-96f9-48c1-9f2b-b1105d719514", 00:16:55.000 "strip_size_kb": 64, 00:16:55.000 "state": "online", 00:16:55.000 "raid_level": "raid5f", 00:16:55.000 "superblock": false, 00:16:55.000 "num_base_bdevs": 4, 00:16:55.000 "num_base_bdevs_discovered": 3, 00:16:55.000 "num_base_bdevs_operational": 3, 00:16:55.000 "base_bdevs_list": [ 00:16:55.000 { 00:16:55.000 "name": null, 00:16:55.000 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.000 "is_configured": false, 00:16:55.000 "data_offset": 0, 00:16:55.000 "data_size": 65536 00:16:55.000 }, 00:16:55.000 { 00:16:55.000 "name": "BaseBdev2", 00:16:55.000 "uuid": "389c413a-9e87-508d-9c3e-f1ba7f84ecc3", 00:16:55.000 "is_configured": true, 00:16:55.000 "data_offset": 0, 00:16:55.000 "data_size": 65536 00:16:55.000 }, 00:16:55.000 { 00:16:55.000 "name": "BaseBdev3", 00:16:55.000 "uuid": "a3e36207-9d8b-59c4-aa87-20aaddb69913", 00:16:55.000 "is_configured": true, 00:16:55.000 "data_offset": 0, 00:16:55.000 "data_size": 65536 00:16:55.001 }, 00:16:55.001 { 00:16:55.001 "name": "BaseBdev4", 00:16:55.001 "uuid": "f65896e5-71c3-5f82-be97-f497ccced304", 00:16:55.001 "is_configured": true, 00:16:55.001 "data_offset": 0, 00:16:55.001 "data_size": 65536 00:16:55.001 } 00:16:55.001 ] 00:16:55.001 }' 00:16:55.001 16:13:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:55.001 16:13:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:55.001 16:13:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:55.001 16:13:21 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:55.001 16:13:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:55.001 16:13:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.001 16:13:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.001 [2024-12-12 16:13:21.191760] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:55.001 [2024-12-12 16:13:21.206358] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b820 00:16:55.001 16:13:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.001 16:13:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:55.001 [2024-12-12 16:13:21.214789] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:55.938 16:13:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:55.938 16:13:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:55.938 16:13:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:55.938 16:13:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:55.938 16:13:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:55.938 16:13:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.938 16:13:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.938 16:13:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:55.938 16:13:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.938 16:13:22 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.938 16:13:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:55.938 "name": "raid_bdev1", 00:16:55.938 "uuid": "61e49867-96f9-48c1-9f2b-b1105d719514", 00:16:55.938 "strip_size_kb": 64, 00:16:55.938 "state": "online", 00:16:55.938 "raid_level": "raid5f", 00:16:55.938 "superblock": false, 00:16:55.938 "num_base_bdevs": 4, 00:16:55.938 "num_base_bdevs_discovered": 4, 00:16:55.938 "num_base_bdevs_operational": 4, 00:16:55.938 "process": { 00:16:55.938 "type": "rebuild", 00:16:55.938 "target": "spare", 00:16:55.938 "progress": { 00:16:55.938 "blocks": 19200, 00:16:55.938 "percent": 9 00:16:55.938 } 00:16:55.938 }, 00:16:55.938 "base_bdevs_list": [ 00:16:55.938 { 00:16:55.938 "name": "spare", 00:16:55.938 "uuid": "e10b6076-a3c9-5ff2-825f-b7eacae71383", 00:16:55.938 "is_configured": true, 00:16:55.938 "data_offset": 0, 00:16:55.938 "data_size": 65536 00:16:55.938 }, 00:16:55.938 { 00:16:55.938 "name": "BaseBdev2", 00:16:55.938 "uuid": "389c413a-9e87-508d-9c3e-f1ba7f84ecc3", 00:16:55.938 "is_configured": true, 00:16:55.938 "data_offset": 0, 00:16:55.938 "data_size": 65536 00:16:55.938 }, 00:16:55.938 { 00:16:55.938 "name": "BaseBdev3", 00:16:55.938 "uuid": "a3e36207-9d8b-59c4-aa87-20aaddb69913", 00:16:55.938 "is_configured": true, 00:16:55.938 "data_offset": 0, 00:16:55.938 "data_size": 65536 00:16:55.938 }, 00:16:55.938 { 00:16:55.938 "name": "BaseBdev4", 00:16:55.938 "uuid": "f65896e5-71c3-5f82-be97-f497ccced304", 00:16:55.938 "is_configured": true, 00:16:55.939 "data_offset": 0, 00:16:55.939 "data_size": 65536 00:16:55.939 } 00:16:55.939 ] 00:16:55.939 }' 00:16:55.939 16:13:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:56.198 16:13:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:56.198 16:13:22 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:56.198 16:13:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:56.198 16:13:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:16:56.198 16:13:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:16:56.198 16:13:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:16:56.198 16:13:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=630 00:16:56.198 16:13:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:56.198 16:13:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:56.198 16:13:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:56.198 16:13:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:56.198 16:13:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:56.198 16:13:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:56.198 16:13:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.198 16:13:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.198 16:13:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.198 16:13:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:56.198 16:13:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.198 16:13:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:56.198 "name": "raid_bdev1", 00:16:56.199 "uuid": "61e49867-96f9-48c1-9f2b-b1105d719514", 
00:16:56.199 "strip_size_kb": 64, 00:16:56.199 "state": "online", 00:16:56.199 "raid_level": "raid5f", 00:16:56.199 "superblock": false, 00:16:56.199 "num_base_bdevs": 4, 00:16:56.199 "num_base_bdevs_discovered": 4, 00:16:56.199 "num_base_bdevs_operational": 4, 00:16:56.199 "process": { 00:16:56.199 "type": "rebuild", 00:16:56.199 "target": "spare", 00:16:56.199 "progress": { 00:16:56.199 "blocks": 21120, 00:16:56.199 "percent": 10 00:16:56.199 } 00:16:56.199 }, 00:16:56.199 "base_bdevs_list": [ 00:16:56.199 { 00:16:56.199 "name": "spare", 00:16:56.199 "uuid": "e10b6076-a3c9-5ff2-825f-b7eacae71383", 00:16:56.199 "is_configured": true, 00:16:56.199 "data_offset": 0, 00:16:56.199 "data_size": 65536 00:16:56.199 }, 00:16:56.199 { 00:16:56.199 "name": "BaseBdev2", 00:16:56.199 "uuid": "389c413a-9e87-508d-9c3e-f1ba7f84ecc3", 00:16:56.199 "is_configured": true, 00:16:56.199 "data_offset": 0, 00:16:56.199 "data_size": 65536 00:16:56.199 }, 00:16:56.199 { 00:16:56.199 "name": "BaseBdev3", 00:16:56.199 "uuid": "a3e36207-9d8b-59c4-aa87-20aaddb69913", 00:16:56.199 "is_configured": true, 00:16:56.199 "data_offset": 0, 00:16:56.199 "data_size": 65536 00:16:56.199 }, 00:16:56.199 { 00:16:56.199 "name": "BaseBdev4", 00:16:56.199 "uuid": "f65896e5-71c3-5f82-be97-f497ccced304", 00:16:56.199 "is_configured": true, 00:16:56.199 "data_offset": 0, 00:16:56.199 "data_size": 65536 00:16:56.199 } 00:16:56.199 ] 00:16:56.199 }' 00:16:56.199 16:13:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:56.199 16:13:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:56.199 16:13:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:56.199 16:13:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:56.199 16:13:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:57.577 16:13:23 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:57.577 16:13:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:57.577 16:13:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:57.577 16:13:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:57.577 16:13:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:57.577 16:13:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:57.577 16:13:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.577 16:13:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.577 16:13:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:57.577 16:13:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.577 16:13:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.577 16:13:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:57.577 "name": "raid_bdev1", 00:16:57.577 "uuid": "61e49867-96f9-48c1-9f2b-b1105d719514", 00:16:57.577 "strip_size_kb": 64, 00:16:57.577 "state": "online", 00:16:57.577 "raid_level": "raid5f", 00:16:57.577 "superblock": false, 00:16:57.577 "num_base_bdevs": 4, 00:16:57.577 "num_base_bdevs_discovered": 4, 00:16:57.577 "num_base_bdevs_operational": 4, 00:16:57.577 "process": { 00:16:57.577 "type": "rebuild", 00:16:57.577 "target": "spare", 00:16:57.577 "progress": { 00:16:57.577 "blocks": 44160, 00:16:57.577 "percent": 22 00:16:57.577 } 00:16:57.577 }, 00:16:57.577 "base_bdevs_list": [ 00:16:57.577 { 00:16:57.577 "name": "spare", 00:16:57.577 "uuid": "e10b6076-a3c9-5ff2-825f-b7eacae71383", 
00:16:57.577 "is_configured": true, 00:16:57.577 "data_offset": 0, 00:16:57.577 "data_size": 65536 00:16:57.577 }, 00:16:57.577 { 00:16:57.577 "name": "BaseBdev2", 00:16:57.577 "uuid": "389c413a-9e87-508d-9c3e-f1ba7f84ecc3", 00:16:57.577 "is_configured": true, 00:16:57.577 "data_offset": 0, 00:16:57.577 "data_size": 65536 00:16:57.577 }, 00:16:57.577 { 00:16:57.577 "name": "BaseBdev3", 00:16:57.577 "uuid": "a3e36207-9d8b-59c4-aa87-20aaddb69913", 00:16:57.577 "is_configured": true, 00:16:57.577 "data_offset": 0, 00:16:57.577 "data_size": 65536 00:16:57.577 }, 00:16:57.577 { 00:16:57.577 "name": "BaseBdev4", 00:16:57.577 "uuid": "f65896e5-71c3-5f82-be97-f497ccced304", 00:16:57.577 "is_configured": true, 00:16:57.577 "data_offset": 0, 00:16:57.577 "data_size": 65536 00:16:57.577 } 00:16:57.577 ] 00:16:57.577 }' 00:16:57.577 16:13:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:57.577 16:13:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:57.578 16:13:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:57.578 16:13:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:57.578 16:13:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:58.515 16:13:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:58.516 16:13:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:58.516 16:13:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:58.516 16:13:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:58.516 16:13:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:58.516 16:13:24 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:58.516 16:13:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.516 16:13:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:58.516 16:13:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.516 16:13:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.516 16:13:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.516 16:13:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:58.516 "name": "raid_bdev1", 00:16:58.516 "uuid": "61e49867-96f9-48c1-9f2b-b1105d719514", 00:16:58.516 "strip_size_kb": 64, 00:16:58.516 "state": "online", 00:16:58.516 "raid_level": "raid5f", 00:16:58.516 "superblock": false, 00:16:58.516 "num_base_bdevs": 4, 00:16:58.516 "num_base_bdevs_discovered": 4, 00:16:58.516 "num_base_bdevs_operational": 4, 00:16:58.516 "process": { 00:16:58.516 "type": "rebuild", 00:16:58.516 "target": "spare", 00:16:58.516 "progress": { 00:16:58.516 "blocks": 65280, 00:16:58.516 "percent": 33 00:16:58.516 } 00:16:58.516 }, 00:16:58.516 "base_bdevs_list": [ 00:16:58.516 { 00:16:58.516 "name": "spare", 00:16:58.516 "uuid": "e10b6076-a3c9-5ff2-825f-b7eacae71383", 00:16:58.516 "is_configured": true, 00:16:58.516 "data_offset": 0, 00:16:58.516 "data_size": 65536 00:16:58.516 }, 00:16:58.516 { 00:16:58.516 "name": "BaseBdev2", 00:16:58.516 "uuid": "389c413a-9e87-508d-9c3e-f1ba7f84ecc3", 00:16:58.516 "is_configured": true, 00:16:58.516 "data_offset": 0, 00:16:58.516 "data_size": 65536 00:16:58.516 }, 00:16:58.516 { 00:16:58.516 "name": "BaseBdev3", 00:16:58.516 "uuid": "a3e36207-9d8b-59c4-aa87-20aaddb69913", 00:16:58.516 "is_configured": true, 00:16:58.516 "data_offset": 0, 00:16:58.516 "data_size": 65536 00:16:58.516 }, 00:16:58.516 { 00:16:58.516 "name": 
"BaseBdev4", 00:16:58.516 "uuid": "f65896e5-71c3-5f82-be97-f497ccced304", 00:16:58.516 "is_configured": true, 00:16:58.516 "data_offset": 0, 00:16:58.516 "data_size": 65536 00:16:58.516 } 00:16:58.516 ] 00:16:58.516 }' 00:16:58.516 16:13:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:58.516 16:13:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:58.516 16:13:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:58.516 16:13:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:58.516 16:13:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:59.452 16:13:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:59.452 16:13:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:59.452 16:13:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:59.452 16:13:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:59.452 16:13:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:59.452 16:13:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:59.452 16:13:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.452 16:13:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.452 16:13:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:59.452 16:13:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.711 16:13:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.711 16:13:25 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:59.711 "name": "raid_bdev1", 00:16:59.711 "uuid": "61e49867-96f9-48c1-9f2b-b1105d719514", 00:16:59.711 "strip_size_kb": 64, 00:16:59.711 "state": "online", 00:16:59.711 "raid_level": "raid5f", 00:16:59.711 "superblock": false, 00:16:59.711 "num_base_bdevs": 4, 00:16:59.711 "num_base_bdevs_discovered": 4, 00:16:59.711 "num_base_bdevs_operational": 4, 00:16:59.711 "process": { 00:16:59.711 "type": "rebuild", 00:16:59.711 "target": "spare", 00:16:59.711 "progress": { 00:16:59.711 "blocks": 86400, 00:16:59.711 "percent": 43 00:16:59.711 } 00:16:59.711 }, 00:16:59.711 "base_bdevs_list": [ 00:16:59.712 { 00:16:59.712 "name": "spare", 00:16:59.712 "uuid": "e10b6076-a3c9-5ff2-825f-b7eacae71383", 00:16:59.712 "is_configured": true, 00:16:59.712 "data_offset": 0, 00:16:59.712 "data_size": 65536 00:16:59.712 }, 00:16:59.712 { 00:16:59.712 "name": "BaseBdev2", 00:16:59.712 "uuid": "389c413a-9e87-508d-9c3e-f1ba7f84ecc3", 00:16:59.712 "is_configured": true, 00:16:59.712 "data_offset": 0, 00:16:59.712 "data_size": 65536 00:16:59.712 }, 00:16:59.712 { 00:16:59.712 "name": "BaseBdev3", 00:16:59.712 "uuid": "a3e36207-9d8b-59c4-aa87-20aaddb69913", 00:16:59.712 "is_configured": true, 00:16:59.712 "data_offset": 0, 00:16:59.712 "data_size": 65536 00:16:59.712 }, 00:16:59.712 { 00:16:59.712 "name": "BaseBdev4", 00:16:59.712 "uuid": "f65896e5-71c3-5f82-be97-f497ccced304", 00:16:59.712 "is_configured": true, 00:16:59.712 "data_offset": 0, 00:16:59.712 "data_size": 65536 00:16:59.712 } 00:16:59.712 ] 00:16:59.712 }' 00:16:59.712 16:13:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:59.712 16:13:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:59.712 16:13:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:59.712 16:13:25 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:59.712 16:13:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:00.649 16:13:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:00.649 16:13:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:00.649 16:13:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:00.649 16:13:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:00.649 16:13:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:00.649 16:13:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:00.649 16:13:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.649 16:13:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:00.649 16:13:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.649 16:13:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.649 16:13:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.649 16:13:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:00.649 "name": "raid_bdev1", 00:17:00.649 "uuid": "61e49867-96f9-48c1-9f2b-b1105d719514", 00:17:00.649 "strip_size_kb": 64, 00:17:00.649 "state": "online", 00:17:00.649 "raid_level": "raid5f", 00:17:00.649 "superblock": false, 00:17:00.649 "num_base_bdevs": 4, 00:17:00.649 "num_base_bdevs_discovered": 4, 00:17:00.649 "num_base_bdevs_operational": 4, 00:17:00.649 "process": { 00:17:00.649 "type": "rebuild", 00:17:00.649 "target": "spare", 00:17:00.649 "progress": { 00:17:00.649 "blocks": 107520, 00:17:00.649 "percent": 54 00:17:00.649 } 
00:17:00.649 }, 00:17:00.649 "base_bdevs_list": [ 00:17:00.649 { 00:17:00.649 "name": "spare", 00:17:00.649 "uuid": "e10b6076-a3c9-5ff2-825f-b7eacae71383", 00:17:00.649 "is_configured": true, 00:17:00.649 "data_offset": 0, 00:17:00.649 "data_size": 65536 00:17:00.649 }, 00:17:00.649 { 00:17:00.649 "name": "BaseBdev2", 00:17:00.649 "uuid": "389c413a-9e87-508d-9c3e-f1ba7f84ecc3", 00:17:00.649 "is_configured": true, 00:17:00.649 "data_offset": 0, 00:17:00.649 "data_size": 65536 00:17:00.649 }, 00:17:00.649 { 00:17:00.649 "name": "BaseBdev3", 00:17:00.649 "uuid": "a3e36207-9d8b-59c4-aa87-20aaddb69913", 00:17:00.649 "is_configured": true, 00:17:00.649 "data_offset": 0, 00:17:00.649 "data_size": 65536 00:17:00.649 }, 00:17:00.649 { 00:17:00.649 "name": "BaseBdev4", 00:17:00.649 "uuid": "f65896e5-71c3-5f82-be97-f497ccced304", 00:17:00.649 "is_configured": true, 00:17:00.649 "data_offset": 0, 00:17:00.649 "data_size": 65536 00:17:00.649 } 00:17:00.649 ] 00:17:00.649 }' 00:17:00.649 16:13:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:00.649 16:13:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:00.909 16:13:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:00.909 16:13:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:00.909 16:13:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:01.845 16:13:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:01.845 16:13:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:01.845 16:13:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:01.845 16:13:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:01.845 
16:13:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:01.845 16:13:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:01.845 16:13:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.845 16:13:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.845 16:13:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:01.845 16:13:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.845 16:13:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.845 16:13:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:01.845 "name": "raid_bdev1", 00:17:01.845 "uuid": "61e49867-96f9-48c1-9f2b-b1105d719514", 00:17:01.845 "strip_size_kb": 64, 00:17:01.845 "state": "online", 00:17:01.845 "raid_level": "raid5f", 00:17:01.845 "superblock": false, 00:17:01.845 "num_base_bdevs": 4, 00:17:01.845 "num_base_bdevs_discovered": 4, 00:17:01.845 "num_base_bdevs_operational": 4, 00:17:01.845 "process": { 00:17:01.845 "type": "rebuild", 00:17:01.845 "target": "spare", 00:17:01.845 "progress": { 00:17:01.845 "blocks": 130560, 00:17:01.845 "percent": 66 00:17:01.845 } 00:17:01.845 }, 00:17:01.845 "base_bdevs_list": [ 00:17:01.845 { 00:17:01.845 "name": "spare", 00:17:01.845 "uuid": "e10b6076-a3c9-5ff2-825f-b7eacae71383", 00:17:01.845 "is_configured": true, 00:17:01.845 "data_offset": 0, 00:17:01.845 "data_size": 65536 00:17:01.845 }, 00:17:01.845 { 00:17:01.845 "name": "BaseBdev2", 00:17:01.845 "uuid": "389c413a-9e87-508d-9c3e-f1ba7f84ecc3", 00:17:01.845 "is_configured": true, 00:17:01.845 "data_offset": 0, 00:17:01.845 "data_size": 65536 00:17:01.845 }, 00:17:01.845 { 00:17:01.845 "name": "BaseBdev3", 00:17:01.845 "uuid": "a3e36207-9d8b-59c4-aa87-20aaddb69913", 
00:17:01.845 "is_configured": true, 00:17:01.845 "data_offset": 0, 00:17:01.845 "data_size": 65536 00:17:01.845 }, 00:17:01.845 { 00:17:01.845 "name": "BaseBdev4", 00:17:01.845 "uuid": "f65896e5-71c3-5f82-be97-f497ccced304", 00:17:01.845 "is_configured": true, 00:17:01.845 "data_offset": 0, 00:17:01.845 "data_size": 65536 00:17:01.845 } 00:17:01.845 ] 00:17:01.845 }' 00:17:01.845 16:13:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:01.845 16:13:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:01.845 16:13:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:02.105 16:13:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:02.105 16:13:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:03.041 16:13:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:03.041 16:13:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:03.041 16:13:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:03.041 16:13:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:03.041 16:13:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:03.041 16:13:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:03.041 16:13:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.041 16:13:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.041 16:13:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:03.041 16:13:29 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:17:03.041 16:13:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.041 16:13:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:03.041 "name": "raid_bdev1", 00:17:03.041 "uuid": "61e49867-96f9-48c1-9f2b-b1105d719514", 00:17:03.041 "strip_size_kb": 64, 00:17:03.041 "state": "online", 00:17:03.041 "raid_level": "raid5f", 00:17:03.041 "superblock": false, 00:17:03.041 "num_base_bdevs": 4, 00:17:03.041 "num_base_bdevs_discovered": 4, 00:17:03.041 "num_base_bdevs_operational": 4, 00:17:03.041 "process": { 00:17:03.041 "type": "rebuild", 00:17:03.041 "target": "spare", 00:17:03.041 "progress": { 00:17:03.041 "blocks": 151680, 00:17:03.041 "percent": 77 00:17:03.041 } 00:17:03.041 }, 00:17:03.041 "base_bdevs_list": [ 00:17:03.041 { 00:17:03.041 "name": "spare", 00:17:03.041 "uuid": "e10b6076-a3c9-5ff2-825f-b7eacae71383", 00:17:03.041 "is_configured": true, 00:17:03.041 "data_offset": 0, 00:17:03.041 "data_size": 65536 00:17:03.041 }, 00:17:03.041 { 00:17:03.041 "name": "BaseBdev2", 00:17:03.041 "uuid": "389c413a-9e87-508d-9c3e-f1ba7f84ecc3", 00:17:03.041 "is_configured": true, 00:17:03.041 "data_offset": 0, 00:17:03.041 "data_size": 65536 00:17:03.041 }, 00:17:03.041 { 00:17:03.041 "name": "BaseBdev3", 00:17:03.041 "uuid": "a3e36207-9d8b-59c4-aa87-20aaddb69913", 00:17:03.041 "is_configured": true, 00:17:03.041 "data_offset": 0, 00:17:03.041 "data_size": 65536 00:17:03.041 }, 00:17:03.041 { 00:17:03.041 "name": "BaseBdev4", 00:17:03.041 "uuid": "f65896e5-71c3-5f82-be97-f497ccced304", 00:17:03.041 "is_configured": true, 00:17:03.041 "data_offset": 0, 00:17:03.041 "data_size": 65536 00:17:03.041 } 00:17:03.041 ] 00:17:03.041 }' 00:17:03.041 16:13:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:03.041 16:13:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:17:03.041 16:13:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:03.041 16:13:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:03.041 16:13:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:04.420 16:13:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:04.420 16:13:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:04.420 16:13:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:04.420 16:13:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:04.420 16:13:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:04.420 16:13:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:04.420 16:13:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.420 16:13:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.420 16:13:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:04.420 16:13:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:04.420 16:13:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.420 16:13:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:04.420 "name": "raid_bdev1", 00:17:04.420 "uuid": "61e49867-96f9-48c1-9f2b-b1105d719514", 00:17:04.420 "strip_size_kb": 64, 00:17:04.420 "state": "online", 00:17:04.420 "raid_level": "raid5f", 00:17:04.420 "superblock": false, 00:17:04.420 "num_base_bdevs": 4, 00:17:04.420 "num_base_bdevs_discovered": 4, 00:17:04.420 "num_base_bdevs_operational": 4, 00:17:04.420 
"process": { 00:17:04.420 "type": "rebuild", 00:17:04.420 "target": "spare", 00:17:04.420 "progress": { 00:17:04.420 "blocks": 174720, 00:17:04.420 "percent": 88 00:17:04.420 } 00:17:04.420 }, 00:17:04.420 "base_bdevs_list": [ 00:17:04.420 { 00:17:04.420 "name": "spare", 00:17:04.420 "uuid": "e10b6076-a3c9-5ff2-825f-b7eacae71383", 00:17:04.420 "is_configured": true, 00:17:04.420 "data_offset": 0, 00:17:04.420 "data_size": 65536 00:17:04.420 }, 00:17:04.420 { 00:17:04.420 "name": "BaseBdev2", 00:17:04.420 "uuid": "389c413a-9e87-508d-9c3e-f1ba7f84ecc3", 00:17:04.420 "is_configured": true, 00:17:04.420 "data_offset": 0, 00:17:04.420 "data_size": 65536 00:17:04.420 }, 00:17:04.420 { 00:17:04.421 "name": "BaseBdev3", 00:17:04.421 "uuid": "a3e36207-9d8b-59c4-aa87-20aaddb69913", 00:17:04.421 "is_configured": true, 00:17:04.421 "data_offset": 0, 00:17:04.421 "data_size": 65536 00:17:04.421 }, 00:17:04.421 { 00:17:04.421 "name": "BaseBdev4", 00:17:04.421 "uuid": "f65896e5-71c3-5f82-be97-f497ccced304", 00:17:04.421 "is_configured": true, 00:17:04.421 "data_offset": 0, 00:17:04.421 "data_size": 65536 00:17:04.421 } 00:17:04.421 ] 00:17:04.421 }' 00:17:04.421 16:13:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:04.421 16:13:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:04.421 16:13:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:04.421 16:13:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:04.421 16:13:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:05.361 16:13:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:05.361 16:13:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:05.361 16:13:31 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:05.361 16:13:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:05.361 16:13:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:05.361 16:13:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:05.361 16:13:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.361 16:13:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.361 16:13:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:05.361 16:13:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.361 16:13:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.361 16:13:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:05.361 "name": "raid_bdev1", 00:17:05.361 "uuid": "61e49867-96f9-48c1-9f2b-b1105d719514", 00:17:05.361 "strip_size_kb": 64, 00:17:05.361 "state": "online", 00:17:05.361 "raid_level": "raid5f", 00:17:05.361 "superblock": false, 00:17:05.361 "num_base_bdevs": 4, 00:17:05.361 "num_base_bdevs_discovered": 4, 00:17:05.361 "num_base_bdevs_operational": 4, 00:17:05.361 "process": { 00:17:05.361 "type": "rebuild", 00:17:05.361 "target": "spare", 00:17:05.361 "progress": { 00:17:05.361 "blocks": 195840, 00:17:05.361 "percent": 99 00:17:05.361 } 00:17:05.361 }, 00:17:05.361 "base_bdevs_list": [ 00:17:05.361 { 00:17:05.361 "name": "spare", 00:17:05.361 "uuid": "e10b6076-a3c9-5ff2-825f-b7eacae71383", 00:17:05.361 "is_configured": true, 00:17:05.361 "data_offset": 0, 00:17:05.361 "data_size": 65536 00:17:05.361 }, 00:17:05.361 { 00:17:05.361 "name": "BaseBdev2", 00:17:05.361 "uuid": "389c413a-9e87-508d-9c3e-f1ba7f84ecc3", 00:17:05.361 "is_configured": true, 00:17:05.361 
"data_offset": 0, 00:17:05.361 "data_size": 65536 00:17:05.361 }, 00:17:05.361 { 00:17:05.361 "name": "BaseBdev3", 00:17:05.361 "uuid": "a3e36207-9d8b-59c4-aa87-20aaddb69913", 00:17:05.361 "is_configured": true, 00:17:05.361 "data_offset": 0, 00:17:05.361 "data_size": 65536 00:17:05.361 }, 00:17:05.361 { 00:17:05.361 "name": "BaseBdev4", 00:17:05.361 "uuid": "f65896e5-71c3-5f82-be97-f497ccced304", 00:17:05.361 "is_configured": true, 00:17:05.361 "data_offset": 0, 00:17:05.361 "data_size": 65536 00:17:05.361 } 00:17:05.361 ] 00:17:05.361 }' 00:17:05.361 16:13:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:05.361 [2024-12-12 16:13:31.572157] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:05.361 [2024-12-12 16:13:31.572230] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:05.361 [2024-12-12 16:13:31.572273] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:05.361 16:13:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:05.361 16:13:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:05.361 16:13:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:05.362 16:13:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:06.300 16:13:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:06.300 16:13:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:06.300 16:13:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:06.300 16:13:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:06.300 16:13:32 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:17:06.300 16:13:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:06.300 16:13:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.559 16:13:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:06.559 16:13:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.559 16:13:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.559 16:13:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.559 16:13:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:06.559 "name": "raid_bdev1", 00:17:06.559 "uuid": "61e49867-96f9-48c1-9f2b-b1105d719514", 00:17:06.559 "strip_size_kb": 64, 00:17:06.559 "state": "online", 00:17:06.559 "raid_level": "raid5f", 00:17:06.559 "superblock": false, 00:17:06.559 "num_base_bdevs": 4, 00:17:06.559 "num_base_bdevs_discovered": 4, 00:17:06.559 "num_base_bdevs_operational": 4, 00:17:06.559 "base_bdevs_list": [ 00:17:06.559 { 00:17:06.559 "name": "spare", 00:17:06.559 "uuid": "e10b6076-a3c9-5ff2-825f-b7eacae71383", 00:17:06.559 "is_configured": true, 00:17:06.559 "data_offset": 0, 00:17:06.559 "data_size": 65536 00:17:06.559 }, 00:17:06.559 { 00:17:06.559 "name": "BaseBdev2", 00:17:06.559 "uuid": "389c413a-9e87-508d-9c3e-f1ba7f84ecc3", 00:17:06.559 "is_configured": true, 00:17:06.559 "data_offset": 0, 00:17:06.559 "data_size": 65536 00:17:06.559 }, 00:17:06.559 { 00:17:06.559 "name": "BaseBdev3", 00:17:06.559 "uuid": "a3e36207-9d8b-59c4-aa87-20aaddb69913", 00:17:06.559 "is_configured": true, 00:17:06.559 "data_offset": 0, 00:17:06.559 "data_size": 65536 00:17:06.559 }, 00:17:06.559 { 00:17:06.559 "name": "BaseBdev4", 00:17:06.559 "uuid": "f65896e5-71c3-5f82-be97-f497ccced304", 00:17:06.559 "is_configured": 
true, 00:17:06.559 "data_offset": 0, 00:17:06.559 "data_size": 65536 00:17:06.559 } 00:17:06.559 ] 00:17:06.559 }' 00:17:06.559 16:13:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:06.559 16:13:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:06.559 16:13:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:06.559 16:13:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:06.559 16:13:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:17:06.559 16:13:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:06.559 16:13:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:06.559 16:13:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:06.559 16:13:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:06.559 16:13:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:06.559 16:13:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.559 16:13:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:06.559 16:13:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.559 16:13:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.559 16:13:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.559 16:13:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:06.559 "name": "raid_bdev1", 00:17:06.559 "uuid": "61e49867-96f9-48c1-9f2b-b1105d719514", 00:17:06.559 "strip_size_kb": 64, 00:17:06.559 "state": 
"online", 00:17:06.559 "raid_level": "raid5f", 00:17:06.559 "superblock": false, 00:17:06.559 "num_base_bdevs": 4, 00:17:06.559 "num_base_bdevs_discovered": 4, 00:17:06.559 "num_base_bdevs_operational": 4, 00:17:06.559 "base_bdevs_list": [ 00:17:06.559 { 00:17:06.559 "name": "spare", 00:17:06.559 "uuid": "e10b6076-a3c9-5ff2-825f-b7eacae71383", 00:17:06.559 "is_configured": true, 00:17:06.559 "data_offset": 0, 00:17:06.559 "data_size": 65536 00:17:06.559 }, 00:17:06.559 { 00:17:06.559 "name": "BaseBdev2", 00:17:06.559 "uuid": "389c413a-9e87-508d-9c3e-f1ba7f84ecc3", 00:17:06.559 "is_configured": true, 00:17:06.559 "data_offset": 0, 00:17:06.559 "data_size": 65536 00:17:06.559 }, 00:17:06.559 { 00:17:06.559 "name": "BaseBdev3", 00:17:06.559 "uuid": "a3e36207-9d8b-59c4-aa87-20aaddb69913", 00:17:06.559 "is_configured": true, 00:17:06.559 "data_offset": 0, 00:17:06.559 "data_size": 65536 00:17:06.559 }, 00:17:06.559 { 00:17:06.559 "name": "BaseBdev4", 00:17:06.559 "uuid": "f65896e5-71c3-5f82-be97-f497ccced304", 00:17:06.559 "is_configured": true, 00:17:06.559 "data_offset": 0, 00:17:06.559 "data_size": 65536 00:17:06.559 } 00:17:06.559 ] 00:17:06.559 }' 00:17:06.559 16:13:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:06.559 16:13:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:06.559 16:13:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:06.819 16:13:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:06.819 16:13:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:06.819 16:13:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:06.819 16:13:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:06.819 16:13:32 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:06.819 16:13:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:06.819 16:13:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:06.819 16:13:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:06.819 16:13:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:06.819 16:13:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:06.819 16:13:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:06.819 16:13:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.819 16:13:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:06.819 16:13:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.819 16:13:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.819 16:13:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.819 16:13:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:06.819 "name": "raid_bdev1", 00:17:06.819 "uuid": "61e49867-96f9-48c1-9f2b-b1105d719514", 00:17:06.819 "strip_size_kb": 64, 00:17:06.819 "state": "online", 00:17:06.819 "raid_level": "raid5f", 00:17:06.819 "superblock": false, 00:17:06.819 "num_base_bdevs": 4, 00:17:06.819 "num_base_bdevs_discovered": 4, 00:17:06.819 "num_base_bdevs_operational": 4, 00:17:06.819 "base_bdevs_list": [ 00:17:06.819 { 00:17:06.819 "name": "spare", 00:17:06.819 "uuid": "e10b6076-a3c9-5ff2-825f-b7eacae71383", 00:17:06.819 "is_configured": true, 00:17:06.819 "data_offset": 0, 00:17:06.819 "data_size": 65536 00:17:06.819 }, 00:17:06.819 { 00:17:06.819 
"name": "BaseBdev2", 00:17:06.819 "uuid": "389c413a-9e87-508d-9c3e-f1ba7f84ecc3", 00:17:06.819 "is_configured": true, 00:17:06.819 "data_offset": 0, 00:17:06.820 "data_size": 65536 00:17:06.820 }, 00:17:06.820 { 00:17:06.820 "name": "BaseBdev3", 00:17:06.820 "uuid": "a3e36207-9d8b-59c4-aa87-20aaddb69913", 00:17:06.820 "is_configured": true, 00:17:06.820 "data_offset": 0, 00:17:06.820 "data_size": 65536 00:17:06.820 }, 00:17:06.820 { 00:17:06.820 "name": "BaseBdev4", 00:17:06.820 "uuid": "f65896e5-71c3-5f82-be97-f497ccced304", 00:17:06.820 "is_configured": true, 00:17:06.820 "data_offset": 0, 00:17:06.820 "data_size": 65536 00:17:06.820 } 00:17:06.820 ] 00:17:06.820 }' 00:17:06.820 16:13:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:06.820 16:13:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.079 16:13:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:07.079 16:13:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.079 16:13:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.079 [2024-12-12 16:13:33.356378] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:07.079 [2024-12-12 16:13:33.356415] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:07.079 [2024-12-12 16:13:33.356506] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:07.079 [2024-12-12 16:13:33.356604] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:07.079 [2024-12-12 16:13:33.356622] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:07.079 16:13:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.079 16:13:33 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.079 16:13:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.079 16:13:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.079 16:13:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:17:07.079 16:13:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.079 16:13:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:07.079 16:13:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:07.079 16:13:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:07.079 16:13:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:07.079 16:13:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:07.079 16:13:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:07.079 16:13:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:07.079 16:13:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:07.079 16:13:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:07.079 16:13:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:17:07.079 16:13:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:07.079 16:13:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:07.079 16:13:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:07.338 /dev/nbd0 00:17:07.338 16:13:33 
bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:07.338 16:13:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:07.338 16:13:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:07.338 16:13:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:17:07.338 16:13:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:07.338 16:13:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:07.338 16:13:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:07.338 16:13:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:17:07.338 16:13:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:07.338 16:13:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:07.338 16:13:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:07.338 1+0 records in 00:17:07.338 1+0 records out 00:17:07.338 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00035143 s, 11.7 MB/s 00:17:07.338 16:13:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:07.338 16:13:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:17:07.338 16:13:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:07.338 16:13:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:07.338 16:13:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:17:07.338 16:13:33 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:07.338 16:13:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:07.338 16:13:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:07.598 /dev/nbd1 00:17:07.598 16:13:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:07.598 16:13:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:07.598 16:13:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:07.598 16:13:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:17:07.598 16:13:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:07.598 16:13:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:07.598 16:13:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:07.598 16:13:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:17:07.598 16:13:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:07.598 16:13:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:07.598 16:13:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:07.598 1+0 records in 00:17:07.598 1+0 records out 00:17:07.598 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000291661 s, 14.0 MB/s 00:17:07.598 16:13:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:07.598 16:13:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:17:07.598 16:13:33 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:07.598 16:13:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:07.598 16:13:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:17:07.598 16:13:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:07.598 16:13:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:07.598 16:13:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:17:07.857 16:13:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:07.857 16:13:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:07.857 16:13:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:07.857 16:13:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:07.857 16:13:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:17:07.857 16:13:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:07.857 16:13:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:08.116 16:13:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:08.116 16:13:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:08.116 16:13:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:08.116 16:13:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:08.116 16:13:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:08.116 16:13:34 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:08.116 16:13:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:08.116 16:13:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:08.116 16:13:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:08.116 16:13:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:08.376 16:13:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:08.376 16:13:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:08.376 16:13:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:08.376 16:13:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:08.376 16:13:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:08.376 16:13:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:08.376 16:13:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:08.376 16:13:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:08.376 16:13:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:17:08.376 16:13:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 86684 00:17:08.376 16:13:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 86684 ']' 00:17:08.376 16:13:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 86684 00:17:08.376 16:13:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:17:08.376 16:13:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:08.376 16:13:34 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86684 00:17:08.376 16:13:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:08.376 16:13:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:08.376 killing process with pid 86684 00:17:08.376 16:13:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86684' 00:17:08.376 16:13:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 86684 00:17:08.376 Received shutdown signal, test time was about 60.000000 seconds 00:17:08.376 00:17:08.376 Latency(us) 00:17:08.376 [2024-12-12T16:13:34.728Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:08.376 [2024-12-12T16:13:34.728Z] =================================================================================================================== 00:17:08.376 [2024-12-12T16:13:34.728Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:08.376 [2024-12-12 16:13:34.569764] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:08.376 16:13:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 86684 00:17:08.944 [2024-12-12 16:13:35.086284] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:10.322 16:13:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:17:10.322 00:17:10.322 real 0m20.022s 00:17:10.322 user 0m23.837s 00:17:10.322 sys 0m2.185s 00:17:10.322 16:13:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:10.322 16:13:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.322 ************************************ 00:17:10.322 END TEST raid5f_rebuild_test 00:17:10.322 ************************************ 00:17:10.322 16:13:36 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb 
raid_rebuild_test raid5f 4 true false true 00:17:10.322 16:13:36 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:10.322 16:13:36 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:10.322 16:13:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:10.322 ************************************ 00:17:10.322 START TEST raid5f_rebuild_test_sb 00:17:10.322 ************************************ 00:17:10.322 16:13:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 true false true 00:17:10.322 16:13:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:17:10.322 16:13:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:17:10.322 16:13:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:10.322 16:13:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:10.322 16:13:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:10.322 16:13:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:10.322 16:13:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:10.322 16:13:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:10.322 16:13:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:10.322 16:13:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:10.322 16:13:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:10.322 16:13:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:10.322 16:13:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:10.322 16:13:36 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:17:10.322 16:13:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:10.322 16:13:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:10.322 16:13:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:17:10.322 16:13:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:10.322 16:13:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:10.322 16:13:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:10.322 16:13:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:10.322 16:13:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:10.322 16:13:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:10.322 16:13:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:10.322 16:13:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:10.322 16:13:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:10.322 16:13:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:17:10.322 16:13:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:17:10.322 16:13:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:17:10.322 16:13:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:17:10.322 16:13:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:10.323 16:13:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:10.323 16:13:36 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=87200 00:17:10.323 16:13:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 87200 00:17:10.323 16:13:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:10.323 16:13:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 87200 ']' 00:17:10.323 16:13:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:10.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:10.323 16:13:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:10.323 16:13:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:10.323 16:13:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:10.323 16:13:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.323 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:10.323 Zero copy mechanism will not be used. 00:17:10.323 [2024-12-12 16:13:36.463372] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:17:10.323 [2024-12-12 16:13:36.463487] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87200 ] 00:17:10.323 [2024-12-12 16:13:36.637829] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:10.581 [2024-12-12 16:13:36.769912] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:10.840 [2024-12-12 16:13:36.986986] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:10.840 [2024-12-12 16:13:36.987039] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:11.099 16:13:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:11.099 16:13:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:17:11.099 16:13:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:11.099 16:13:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:11.099 16:13:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.099 16:13:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.099 BaseBdev1_malloc 00:17:11.099 16:13:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.099 16:13:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:11.099 16:13:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.099 16:13:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.099 [2024-12-12 16:13:37.343628] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:11.099 [2024-12-12 16:13:37.343709] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:11.099 [2024-12-12 16:13:37.343736] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:11.099 [2024-12-12 16:13:37.343750] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:11.099 [2024-12-12 16:13:37.346066] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:11.099 [2024-12-12 16:13:37.346111] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:11.099 BaseBdev1 00:17:11.099 16:13:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.099 16:13:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:11.099 16:13:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:11.099 16:13:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.099 16:13:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.099 BaseBdev2_malloc 00:17:11.099 16:13:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.099 16:13:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:11.099 16:13:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.099 16:13:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.099 [2024-12-12 16:13:37.403304] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:11.099 [2024-12-12 16:13:37.403372] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:17:11.099 [2024-12-12 16:13:37.403393] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:11.099 [2024-12-12 16:13:37.403406] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:11.099 [2024-12-12 16:13:37.405725] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:11.099 [2024-12-12 16:13:37.405769] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:11.099 BaseBdev2 00:17:11.099 16:13:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.099 16:13:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:11.099 16:13:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:11.099 16:13:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.099 16:13:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.359 BaseBdev3_malloc 00:17:11.359 16:13:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.359 16:13:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:17:11.359 16:13:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.359 16:13:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.359 [2024-12-12 16:13:37.495326] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:17:11.359 [2024-12-12 16:13:37.495383] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:11.359 [2024-12-12 16:13:37.495409] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:11.359 [2024-12-12 
16:13:37.495422] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:11.359 [2024-12-12 16:13:37.497686] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:11.359 [2024-12-12 16:13:37.497731] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:11.359 BaseBdev3 00:17:11.359 16:13:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.359 16:13:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:11.359 16:13:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:17:11.359 16:13:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.359 16:13:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.359 BaseBdev4_malloc 00:17:11.359 16:13:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.359 16:13:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:17:11.359 16:13:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.359 16:13:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.359 [2024-12-12 16:13:37.551502] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:17:11.359 [2024-12-12 16:13:37.551565] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:11.359 [2024-12-12 16:13:37.551588] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:11.359 [2024-12-12 16:13:37.551608] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:11.359 [2024-12-12 16:13:37.553810] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:17:11.360 [2024-12-12 16:13:37.553854] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:17:11.360 BaseBdev4 00:17:11.360 16:13:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.360 16:13:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:17:11.360 16:13:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.360 16:13:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.360 spare_malloc 00:17:11.360 16:13:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.360 16:13:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:11.360 16:13:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.360 16:13:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.360 spare_delay 00:17:11.360 16:13:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.360 16:13:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:11.360 16:13:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.360 16:13:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.360 [2024-12-12 16:13:37.619635] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:11.360 [2024-12-12 16:13:37.619687] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:11.360 [2024-12-12 16:13:37.619707] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 
00:17:11.360 [2024-12-12 16:13:37.619720] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:11.360 [2024-12-12 16:13:37.621946] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:11.360 [2024-12-12 16:13:37.621988] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:11.360 spare 00:17:11.360 16:13:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.360 16:13:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:17:11.360 16:13:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.360 16:13:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.360 [2024-12-12 16:13:37.631678] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:11.360 [2024-12-12 16:13:37.633661] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:11.360 [2024-12-12 16:13:37.633732] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:11.360 [2024-12-12 16:13:37.633790] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:11.360 [2024-12-12 16:13:37.634011] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:11.360 [2024-12-12 16:13:37.634035] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:11.360 [2024-12-12 16:13:37.634285] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:11.360 [2024-12-12 16:13:37.640656] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:11.360 [2024-12-12 16:13:37.640683] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is 
created with name raid_bdev1, raid_bdev 0x617000007780 00:17:11.360 [2024-12-12 16:13:37.640866] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:11.360 16:13:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.360 16:13:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:11.360 16:13:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:11.360 16:13:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:11.360 16:13:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:11.360 16:13:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:11.360 16:13:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:11.360 16:13:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:11.360 16:13:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:11.360 16:13:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:11.360 16:13:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:11.360 16:13:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:11.360 16:13:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.360 16:13:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:11.360 16:13:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.360 16:13:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.360 16:13:37 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:11.360 "name": "raid_bdev1", 00:17:11.360 "uuid": "026fad59-808f-4b46-bb50-f4da73889a1c", 00:17:11.360 "strip_size_kb": 64, 00:17:11.360 "state": "online", 00:17:11.360 "raid_level": "raid5f", 00:17:11.360 "superblock": true, 00:17:11.360 "num_base_bdevs": 4, 00:17:11.360 "num_base_bdevs_discovered": 4, 00:17:11.360 "num_base_bdevs_operational": 4, 00:17:11.360 "base_bdevs_list": [ 00:17:11.360 { 00:17:11.360 "name": "BaseBdev1", 00:17:11.360 "uuid": "63f6cf72-b16b-5d58-8511-79c286a4c370", 00:17:11.360 "is_configured": true, 00:17:11.360 "data_offset": 2048, 00:17:11.360 "data_size": 63488 00:17:11.360 }, 00:17:11.360 { 00:17:11.360 "name": "BaseBdev2", 00:17:11.360 "uuid": "1fe8f748-fd75-5726-8721-5daacf1bb2ea", 00:17:11.360 "is_configured": true, 00:17:11.360 "data_offset": 2048, 00:17:11.360 "data_size": 63488 00:17:11.360 }, 00:17:11.360 { 00:17:11.360 "name": "BaseBdev3", 00:17:11.360 "uuid": "0491dd25-dd82-582d-9640-f7b996e25590", 00:17:11.360 "is_configured": true, 00:17:11.360 "data_offset": 2048, 00:17:11.360 "data_size": 63488 00:17:11.360 }, 00:17:11.360 { 00:17:11.360 "name": "BaseBdev4", 00:17:11.360 "uuid": "2c0d6cc9-d9a3-544f-979c-65ca19e37109", 00:17:11.360 "is_configured": true, 00:17:11.360 "data_offset": 2048, 00:17:11.360 "data_size": 63488 00:17:11.360 } 00:17:11.360 ] 00:17:11.360 }' 00:17:11.360 16:13:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:11.360 16:13:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.929 16:13:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:11.929 16:13:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.929 16:13:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.929 16:13:38 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:11.929 [2024-12-12 16:13:38.080827] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:11.929 16:13:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.929 16:13:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:17:11.929 16:13:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:11.929 16:13:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.930 16:13:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.930 16:13:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:11.930 16:13:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.930 16:13:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:17:11.930 16:13:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:11.930 16:13:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:11.930 16:13:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:11.930 16:13:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:11.930 16:13:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:11.930 16:13:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:11.930 16:13:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:11.930 16:13:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:11.930 16:13:38 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@11 -- # local nbd_list 00:17:11.930 16:13:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:17:11.930 16:13:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:11.930 16:13:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:11.930 16:13:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:12.189 [2024-12-12 16:13:38.368178] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:17:12.189 /dev/nbd0 00:17:12.189 16:13:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:12.189 16:13:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:12.189 16:13:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:12.189 16:13:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:17:12.189 16:13:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:12.189 16:13:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:12.189 16:13:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:12.190 16:13:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:17:12.190 16:13:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:12.190 16:13:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:12.190 16:13:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:12.190 1+0 records in 00:17:12.190 1+0 records out 00:17:12.190 4096 
bytes (4.1 kB, 4.0 KiB) copied, 0.000409875 s, 10.0 MB/s 00:17:12.190 16:13:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:12.190 16:13:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:17:12.190 16:13:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:12.190 16:13:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:12.190 16:13:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:17:12.190 16:13:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:12.190 16:13:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:12.190 16:13:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:17:12.190 16:13:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:17:12.190 16:13:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:17:12.190 16:13:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:17:12.759 496+0 records in 00:17:12.759 496+0 records out 00:17:12.759 97517568 bytes (98 MB, 93 MiB) copied, 0.443708 s, 220 MB/s 00:17:12.759 16:13:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:12.759 16:13:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:12.759 16:13:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:12.759 16:13:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:12.759 16:13:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # 
local i 00:17:12.759 16:13:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:12.759 16:13:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:12.759 16:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:12.759 [2024-12-12 16:13:39.102214] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:12.759 16:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:12.759 16:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:12.759 16:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:12.759 16:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:12.759 16:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:13.018 16:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:13.018 16:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:13.018 16:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:13.018 16:13:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.018 16:13:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.018 [2024-12-12 16:13:39.116958] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:13.018 16:13:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.018 16:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:13.018 16:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:17:13.018 16:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:13.018 16:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:13.018 16:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:13.018 16:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:13.018 16:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:13.018 16:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:13.018 16:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:13.018 16:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:13.018 16:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.018 16:13:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.018 16:13:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.018 16:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:13.018 16:13:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.018 16:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:13.018 "name": "raid_bdev1", 00:17:13.018 "uuid": "026fad59-808f-4b46-bb50-f4da73889a1c", 00:17:13.018 "strip_size_kb": 64, 00:17:13.018 "state": "online", 00:17:13.018 "raid_level": "raid5f", 00:17:13.018 "superblock": true, 00:17:13.018 "num_base_bdevs": 4, 00:17:13.018 "num_base_bdevs_discovered": 3, 00:17:13.018 "num_base_bdevs_operational": 3, 00:17:13.018 "base_bdevs_list": [ 00:17:13.018 { 00:17:13.018 "name": null, 
00:17:13.018 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:13.018 "is_configured": false, 00:17:13.018 "data_offset": 0, 00:17:13.018 "data_size": 63488 00:17:13.018 }, 00:17:13.018 { 00:17:13.018 "name": "BaseBdev2", 00:17:13.018 "uuid": "1fe8f748-fd75-5726-8721-5daacf1bb2ea", 00:17:13.018 "is_configured": true, 00:17:13.018 "data_offset": 2048, 00:17:13.018 "data_size": 63488 00:17:13.018 }, 00:17:13.018 { 00:17:13.018 "name": "BaseBdev3", 00:17:13.018 "uuid": "0491dd25-dd82-582d-9640-f7b996e25590", 00:17:13.018 "is_configured": true, 00:17:13.018 "data_offset": 2048, 00:17:13.018 "data_size": 63488 00:17:13.018 }, 00:17:13.018 { 00:17:13.018 "name": "BaseBdev4", 00:17:13.018 "uuid": "2c0d6cc9-d9a3-544f-979c-65ca19e37109", 00:17:13.018 "is_configured": true, 00:17:13.018 "data_offset": 2048, 00:17:13.018 "data_size": 63488 00:17:13.019 } 00:17:13.019 ] 00:17:13.019 }' 00:17:13.019 16:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:13.019 16:13:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.278 16:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:13.278 16:13:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.278 16:13:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.278 [2024-12-12 16:13:39.548284] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:13.278 [2024-12-12 16:13:39.565382] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002aa50 00:17:13.278 16:13:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.278 16:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:13.278 [2024-12-12 16:13:39.576195] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid 
bdev raid_bdev1 00:17:14.658 16:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:14.658 16:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:14.658 16:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:14.658 16:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:14.658 16:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:14.658 16:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.658 16:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:14.658 16:13:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.658 16:13:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:14.658 16:13:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.658 16:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:14.658 "name": "raid_bdev1", 00:17:14.658 "uuid": "026fad59-808f-4b46-bb50-f4da73889a1c", 00:17:14.658 "strip_size_kb": 64, 00:17:14.658 "state": "online", 00:17:14.658 "raid_level": "raid5f", 00:17:14.658 "superblock": true, 00:17:14.658 "num_base_bdevs": 4, 00:17:14.658 "num_base_bdevs_discovered": 4, 00:17:14.658 "num_base_bdevs_operational": 4, 00:17:14.658 "process": { 00:17:14.658 "type": "rebuild", 00:17:14.658 "target": "spare", 00:17:14.658 "progress": { 00:17:14.658 "blocks": 17280, 00:17:14.658 "percent": 9 00:17:14.658 } 00:17:14.658 }, 00:17:14.658 "base_bdevs_list": [ 00:17:14.658 { 00:17:14.658 "name": "spare", 00:17:14.658 "uuid": "a746c7d7-5d45-53b6-a7ba-09519adebe4b", 00:17:14.658 "is_configured": true, 
00:17:14.658 "data_offset": 2048, 00:17:14.658 "data_size": 63488 00:17:14.658 }, 00:17:14.658 { 00:17:14.658 "name": "BaseBdev2", 00:17:14.658 "uuid": "1fe8f748-fd75-5726-8721-5daacf1bb2ea", 00:17:14.658 "is_configured": true, 00:17:14.658 "data_offset": 2048, 00:17:14.658 "data_size": 63488 00:17:14.658 }, 00:17:14.658 { 00:17:14.658 "name": "BaseBdev3", 00:17:14.658 "uuid": "0491dd25-dd82-582d-9640-f7b996e25590", 00:17:14.658 "is_configured": true, 00:17:14.658 "data_offset": 2048, 00:17:14.658 "data_size": 63488 00:17:14.658 }, 00:17:14.658 { 00:17:14.658 "name": "BaseBdev4", 00:17:14.658 "uuid": "2c0d6cc9-d9a3-544f-979c-65ca19e37109", 00:17:14.658 "is_configured": true, 00:17:14.658 "data_offset": 2048, 00:17:14.658 "data_size": 63488 00:17:14.658 } 00:17:14.658 ] 00:17:14.658 }' 00:17:14.658 16:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:14.658 16:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:14.658 16:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:14.658 16:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:14.658 16:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:14.658 16:13:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.658 16:13:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:14.658 [2024-12-12 16:13:40.731370] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:14.658 [2024-12-12 16:13:40.785372] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:14.658 [2024-12-12 16:13:40.785468] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:14.658 [2024-12-12 
16:13:40.785490] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:14.659 [2024-12-12 16:13:40.785504] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:14.659 16:13:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.659 16:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:14.659 16:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:14.659 16:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:14.659 16:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:14.659 16:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:14.659 16:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:14.659 16:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:14.659 16:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:14.659 16:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:14.659 16:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:14.659 16:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.659 16:13:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.659 16:13:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:14.659 16:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:14.659 16:13:40 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.659 16:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:14.659 "name": "raid_bdev1", 00:17:14.659 "uuid": "026fad59-808f-4b46-bb50-f4da73889a1c", 00:17:14.659 "strip_size_kb": 64, 00:17:14.659 "state": "online", 00:17:14.659 "raid_level": "raid5f", 00:17:14.659 "superblock": true, 00:17:14.659 "num_base_bdevs": 4, 00:17:14.659 "num_base_bdevs_discovered": 3, 00:17:14.659 "num_base_bdevs_operational": 3, 00:17:14.659 "base_bdevs_list": [ 00:17:14.659 { 00:17:14.659 "name": null, 00:17:14.659 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:14.659 "is_configured": false, 00:17:14.659 "data_offset": 0, 00:17:14.659 "data_size": 63488 00:17:14.659 }, 00:17:14.659 { 00:17:14.659 "name": "BaseBdev2", 00:17:14.659 "uuid": "1fe8f748-fd75-5726-8721-5daacf1bb2ea", 00:17:14.659 "is_configured": true, 00:17:14.659 "data_offset": 2048, 00:17:14.659 "data_size": 63488 00:17:14.659 }, 00:17:14.659 { 00:17:14.659 "name": "BaseBdev3", 00:17:14.659 "uuid": "0491dd25-dd82-582d-9640-f7b996e25590", 00:17:14.659 "is_configured": true, 00:17:14.659 "data_offset": 2048, 00:17:14.659 "data_size": 63488 00:17:14.659 }, 00:17:14.659 { 00:17:14.659 "name": "BaseBdev4", 00:17:14.659 "uuid": "2c0d6cc9-d9a3-544f-979c-65ca19e37109", 00:17:14.659 "is_configured": true, 00:17:14.659 "data_offset": 2048, 00:17:14.659 "data_size": 63488 00:17:14.659 } 00:17:14.659 ] 00:17:14.659 }' 00:17:14.659 16:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:14.659 16:13:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:14.918 16:13:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:14.918 16:13:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:14.918 16:13:41 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:14.918 16:13:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:14.918 16:13:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:14.918 16:13:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.918 16:13:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.918 16:13:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:14.918 16:13:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:15.178 16:13:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.178 16:13:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:15.178 "name": "raid_bdev1", 00:17:15.178 "uuid": "026fad59-808f-4b46-bb50-f4da73889a1c", 00:17:15.178 "strip_size_kb": 64, 00:17:15.178 "state": "online", 00:17:15.178 "raid_level": "raid5f", 00:17:15.178 "superblock": true, 00:17:15.178 "num_base_bdevs": 4, 00:17:15.178 "num_base_bdevs_discovered": 3, 00:17:15.178 "num_base_bdevs_operational": 3, 00:17:15.178 "base_bdevs_list": [ 00:17:15.178 { 00:17:15.178 "name": null, 00:17:15.178 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:15.178 "is_configured": false, 00:17:15.178 "data_offset": 0, 00:17:15.178 "data_size": 63488 00:17:15.178 }, 00:17:15.178 { 00:17:15.178 "name": "BaseBdev2", 00:17:15.178 "uuid": "1fe8f748-fd75-5726-8721-5daacf1bb2ea", 00:17:15.178 "is_configured": true, 00:17:15.178 "data_offset": 2048, 00:17:15.178 "data_size": 63488 00:17:15.178 }, 00:17:15.178 { 00:17:15.178 "name": "BaseBdev3", 00:17:15.178 "uuid": "0491dd25-dd82-582d-9640-f7b996e25590", 00:17:15.178 "is_configured": true, 00:17:15.178 "data_offset": 2048, 00:17:15.178 "data_size": 63488 00:17:15.178 }, 
00:17:15.178 { 00:17:15.178 "name": "BaseBdev4", 00:17:15.178 "uuid": "2c0d6cc9-d9a3-544f-979c-65ca19e37109", 00:17:15.178 "is_configured": true, 00:17:15.178 "data_offset": 2048, 00:17:15.178 "data_size": 63488 00:17:15.178 } 00:17:15.179 ] 00:17:15.179 }' 00:17:15.179 16:13:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:15.179 16:13:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:15.179 16:13:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:15.179 16:13:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:15.179 16:13:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:15.179 16:13:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.179 16:13:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:15.179 [2024-12-12 16:13:41.348052] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:15.179 [2024-12-12 16:13:41.363547] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ab20 00:17:15.179 16:13:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.179 16:13:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:15.179 [2024-12-12 16:13:41.373098] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:16.116 16:13:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:16.116 16:13:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:16.116 16:13:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
00:17:16.116 16:13:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:16.116 16:13:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:16.116 16:13:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:16.116 16:13:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:16.116 16:13:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.116 16:13:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:16.116 16:13:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.116 16:13:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:16.116 "name": "raid_bdev1", 00:17:16.116 "uuid": "026fad59-808f-4b46-bb50-f4da73889a1c", 00:17:16.116 "strip_size_kb": 64, 00:17:16.116 "state": "online", 00:17:16.116 "raid_level": "raid5f", 00:17:16.116 "superblock": true, 00:17:16.116 "num_base_bdevs": 4, 00:17:16.116 "num_base_bdevs_discovered": 4, 00:17:16.116 "num_base_bdevs_operational": 4, 00:17:16.116 "process": { 00:17:16.116 "type": "rebuild", 00:17:16.116 "target": "spare", 00:17:16.116 "progress": { 00:17:16.116 "blocks": 19200, 00:17:16.116 "percent": 10 00:17:16.116 } 00:17:16.116 }, 00:17:16.116 "base_bdevs_list": [ 00:17:16.116 { 00:17:16.116 "name": "spare", 00:17:16.116 "uuid": "a746c7d7-5d45-53b6-a7ba-09519adebe4b", 00:17:16.116 "is_configured": true, 00:17:16.116 "data_offset": 2048, 00:17:16.116 "data_size": 63488 00:17:16.116 }, 00:17:16.116 { 00:17:16.116 "name": "BaseBdev2", 00:17:16.116 "uuid": "1fe8f748-fd75-5726-8721-5daacf1bb2ea", 00:17:16.116 "is_configured": true, 00:17:16.116 "data_offset": 2048, 00:17:16.116 "data_size": 63488 00:17:16.116 }, 00:17:16.116 { 00:17:16.116 "name": "BaseBdev3", 00:17:16.116 "uuid": 
"0491dd25-dd82-582d-9640-f7b996e25590", 00:17:16.116 "is_configured": true, 00:17:16.116 "data_offset": 2048, 00:17:16.116 "data_size": 63488 00:17:16.116 }, 00:17:16.116 { 00:17:16.116 "name": "BaseBdev4", 00:17:16.116 "uuid": "2c0d6cc9-d9a3-544f-979c-65ca19e37109", 00:17:16.116 "is_configured": true, 00:17:16.116 "data_offset": 2048, 00:17:16.116 "data_size": 63488 00:17:16.116 } 00:17:16.116 ] 00:17:16.116 }' 00:17:16.116 16:13:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:16.376 16:13:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:16.376 16:13:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:16.376 16:13:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:16.376 16:13:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:16.376 16:13:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:16.376 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:16.376 16:13:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:17:16.376 16:13:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:17:16.376 16:13:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=650 00:17:16.376 16:13:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:16.376 16:13:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:16.376 16:13:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:16.376 16:13:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
00:17:16.376 16:13:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:16.376 16:13:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:16.376 16:13:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:16.376 16:13:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.376 16:13:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:16.376 16:13:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:16.376 16:13:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.376 16:13:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:16.376 "name": "raid_bdev1", 00:17:16.376 "uuid": "026fad59-808f-4b46-bb50-f4da73889a1c", 00:17:16.376 "strip_size_kb": 64, 00:17:16.376 "state": "online", 00:17:16.376 "raid_level": "raid5f", 00:17:16.376 "superblock": true, 00:17:16.376 "num_base_bdevs": 4, 00:17:16.376 "num_base_bdevs_discovered": 4, 00:17:16.376 "num_base_bdevs_operational": 4, 00:17:16.376 "process": { 00:17:16.376 "type": "rebuild", 00:17:16.376 "target": "spare", 00:17:16.376 "progress": { 00:17:16.376 "blocks": 21120, 00:17:16.376 "percent": 11 00:17:16.376 } 00:17:16.376 }, 00:17:16.376 "base_bdevs_list": [ 00:17:16.376 { 00:17:16.376 "name": "spare", 00:17:16.376 "uuid": "a746c7d7-5d45-53b6-a7ba-09519adebe4b", 00:17:16.376 "is_configured": true, 00:17:16.376 "data_offset": 2048, 00:17:16.376 "data_size": 63488 00:17:16.376 }, 00:17:16.376 { 00:17:16.376 "name": "BaseBdev2", 00:17:16.376 "uuid": "1fe8f748-fd75-5726-8721-5daacf1bb2ea", 00:17:16.376 "is_configured": true, 00:17:16.376 "data_offset": 2048, 00:17:16.376 "data_size": 63488 00:17:16.376 }, 00:17:16.376 { 00:17:16.376 "name": "BaseBdev3", 00:17:16.376 "uuid": 
"0491dd25-dd82-582d-9640-f7b996e25590", 00:17:16.376 "is_configured": true, 00:17:16.376 "data_offset": 2048, 00:17:16.376 "data_size": 63488 00:17:16.376 }, 00:17:16.376 { 00:17:16.376 "name": "BaseBdev4", 00:17:16.376 "uuid": "2c0d6cc9-d9a3-544f-979c-65ca19e37109", 00:17:16.376 "is_configured": true, 00:17:16.376 "data_offset": 2048, 00:17:16.376 "data_size": 63488 00:17:16.376 } 00:17:16.376 ] 00:17:16.376 }' 00:17:16.376 16:13:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:16.376 16:13:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:16.376 16:13:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:16.376 16:13:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:16.376 16:13:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:17.762 16:13:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:17.762 16:13:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:17.762 16:13:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:17.762 16:13:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:17.762 16:13:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:17.762 16:13:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:17.762 16:13:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.762 16:13:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.762 16:13:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:17:17.762 16:13:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:17.762 16:13:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.762 16:13:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:17.762 "name": "raid_bdev1", 00:17:17.762 "uuid": "026fad59-808f-4b46-bb50-f4da73889a1c", 00:17:17.762 "strip_size_kb": 64, 00:17:17.762 "state": "online", 00:17:17.762 "raid_level": "raid5f", 00:17:17.762 "superblock": true, 00:17:17.762 "num_base_bdevs": 4, 00:17:17.762 "num_base_bdevs_discovered": 4, 00:17:17.762 "num_base_bdevs_operational": 4, 00:17:17.762 "process": { 00:17:17.762 "type": "rebuild", 00:17:17.762 "target": "spare", 00:17:17.762 "progress": { 00:17:17.762 "blocks": 42240, 00:17:17.762 "percent": 22 00:17:17.762 } 00:17:17.762 }, 00:17:17.762 "base_bdevs_list": [ 00:17:17.762 { 00:17:17.762 "name": "spare", 00:17:17.762 "uuid": "a746c7d7-5d45-53b6-a7ba-09519adebe4b", 00:17:17.762 "is_configured": true, 00:17:17.762 "data_offset": 2048, 00:17:17.762 "data_size": 63488 00:17:17.762 }, 00:17:17.762 { 00:17:17.762 "name": "BaseBdev2", 00:17:17.762 "uuid": "1fe8f748-fd75-5726-8721-5daacf1bb2ea", 00:17:17.762 "is_configured": true, 00:17:17.762 "data_offset": 2048, 00:17:17.762 "data_size": 63488 00:17:17.762 }, 00:17:17.762 { 00:17:17.762 "name": "BaseBdev3", 00:17:17.762 "uuid": "0491dd25-dd82-582d-9640-f7b996e25590", 00:17:17.762 "is_configured": true, 00:17:17.762 "data_offset": 2048, 00:17:17.762 "data_size": 63488 00:17:17.762 }, 00:17:17.762 { 00:17:17.762 "name": "BaseBdev4", 00:17:17.762 "uuid": "2c0d6cc9-d9a3-544f-979c-65ca19e37109", 00:17:17.762 "is_configured": true, 00:17:17.762 "data_offset": 2048, 00:17:17.762 "data_size": 63488 00:17:17.762 } 00:17:17.762 ] 00:17:17.762 }' 00:17:17.762 16:13:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:17.762 16:13:43 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:17.762 16:13:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:17.762 16:13:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:17.762 16:13:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:18.711 16:13:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:18.711 16:13:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:18.711 16:13:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:18.711 16:13:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:18.711 16:13:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:18.711 16:13:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:18.711 16:13:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.711 16:13:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:18.711 16:13:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.711 16:13:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:18.711 16:13:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.711 16:13:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:18.711 "name": "raid_bdev1", 00:17:18.711 "uuid": "026fad59-808f-4b46-bb50-f4da73889a1c", 00:17:18.711 "strip_size_kb": 64, 00:17:18.711 "state": "online", 00:17:18.711 "raid_level": "raid5f", 00:17:18.711 "superblock": true, 
00:17:18.711 "num_base_bdevs": 4, 00:17:18.711 "num_base_bdevs_discovered": 4, 00:17:18.711 "num_base_bdevs_operational": 4, 00:17:18.711 "process": { 00:17:18.711 "type": "rebuild", 00:17:18.711 "target": "spare", 00:17:18.711 "progress": { 00:17:18.711 "blocks": 65280, 00:17:18.711 "percent": 34 00:17:18.711 } 00:17:18.711 }, 00:17:18.711 "base_bdevs_list": [ 00:17:18.711 { 00:17:18.711 "name": "spare", 00:17:18.711 "uuid": "a746c7d7-5d45-53b6-a7ba-09519adebe4b", 00:17:18.711 "is_configured": true, 00:17:18.711 "data_offset": 2048, 00:17:18.711 "data_size": 63488 00:17:18.711 }, 00:17:18.711 { 00:17:18.711 "name": "BaseBdev2", 00:17:18.711 "uuid": "1fe8f748-fd75-5726-8721-5daacf1bb2ea", 00:17:18.711 "is_configured": true, 00:17:18.711 "data_offset": 2048, 00:17:18.711 "data_size": 63488 00:17:18.711 }, 00:17:18.711 { 00:17:18.711 "name": "BaseBdev3", 00:17:18.711 "uuid": "0491dd25-dd82-582d-9640-f7b996e25590", 00:17:18.711 "is_configured": true, 00:17:18.711 "data_offset": 2048, 00:17:18.711 "data_size": 63488 00:17:18.711 }, 00:17:18.711 { 00:17:18.711 "name": "BaseBdev4", 00:17:18.711 "uuid": "2c0d6cc9-d9a3-544f-979c-65ca19e37109", 00:17:18.711 "is_configured": true, 00:17:18.711 "data_offset": 2048, 00:17:18.711 "data_size": 63488 00:17:18.711 } 00:17:18.711 ] 00:17:18.711 }' 00:17:18.711 16:13:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:18.711 16:13:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:18.711 16:13:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:18.711 16:13:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:18.711 16:13:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:19.648 16:13:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:19.648 16:13:45 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:19.648 16:13:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:19.648 16:13:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:19.648 16:13:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:19.648 16:13:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:19.648 16:13:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.648 16:13:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:19.648 16:13:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.648 16:13:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:19.907 16:13:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.907 16:13:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:19.907 "name": "raid_bdev1", 00:17:19.907 "uuid": "026fad59-808f-4b46-bb50-f4da73889a1c", 00:17:19.907 "strip_size_kb": 64, 00:17:19.907 "state": "online", 00:17:19.907 "raid_level": "raid5f", 00:17:19.907 "superblock": true, 00:17:19.907 "num_base_bdevs": 4, 00:17:19.907 "num_base_bdevs_discovered": 4, 00:17:19.907 "num_base_bdevs_operational": 4, 00:17:19.907 "process": { 00:17:19.907 "type": "rebuild", 00:17:19.907 "target": "spare", 00:17:19.907 "progress": { 00:17:19.907 "blocks": 86400, 00:17:19.907 "percent": 45 00:17:19.907 } 00:17:19.907 }, 00:17:19.907 "base_bdevs_list": [ 00:17:19.907 { 00:17:19.907 "name": "spare", 00:17:19.907 "uuid": "a746c7d7-5d45-53b6-a7ba-09519adebe4b", 00:17:19.907 "is_configured": true, 00:17:19.907 "data_offset": 2048, 00:17:19.907 
"data_size": 63488 00:17:19.907 }, 00:17:19.907 { 00:17:19.907 "name": "BaseBdev2", 00:17:19.907 "uuid": "1fe8f748-fd75-5726-8721-5daacf1bb2ea", 00:17:19.907 "is_configured": true, 00:17:19.907 "data_offset": 2048, 00:17:19.907 "data_size": 63488 00:17:19.907 }, 00:17:19.907 { 00:17:19.907 "name": "BaseBdev3", 00:17:19.907 "uuid": "0491dd25-dd82-582d-9640-f7b996e25590", 00:17:19.907 "is_configured": true, 00:17:19.907 "data_offset": 2048, 00:17:19.907 "data_size": 63488 00:17:19.907 }, 00:17:19.907 { 00:17:19.907 "name": "BaseBdev4", 00:17:19.907 "uuid": "2c0d6cc9-d9a3-544f-979c-65ca19e37109", 00:17:19.907 "is_configured": true, 00:17:19.907 "data_offset": 2048, 00:17:19.907 "data_size": 63488 00:17:19.907 } 00:17:19.907 ] 00:17:19.907 }' 00:17:19.907 16:13:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:19.907 16:13:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:19.907 16:13:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:19.907 16:13:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:19.908 16:13:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:20.847 16:13:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:20.847 16:13:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:20.847 16:13:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:20.847 16:13:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:20.847 16:13:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:20.847 16:13:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:17:20.847 16:13:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.847 16:13:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:20.847 16:13:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.847 16:13:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:20.847 16:13:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.847 16:13:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:20.847 "name": "raid_bdev1", 00:17:20.847 "uuid": "026fad59-808f-4b46-bb50-f4da73889a1c", 00:17:20.847 "strip_size_kb": 64, 00:17:20.847 "state": "online", 00:17:20.847 "raid_level": "raid5f", 00:17:20.847 "superblock": true, 00:17:20.847 "num_base_bdevs": 4, 00:17:20.847 "num_base_bdevs_discovered": 4, 00:17:20.847 "num_base_bdevs_operational": 4, 00:17:20.847 "process": { 00:17:20.847 "type": "rebuild", 00:17:20.847 "target": "spare", 00:17:20.847 "progress": { 00:17:20.847 "blocks": 109440, 00:17:20.847 "percent": 57 00:17:20.847 } 00:17:20.847 }, 00:17:20.847 "base_bdevs_list": [ 00:17:20.847 { 00:17:20.847 "name": "spare", 00:17:20.847 "uuid": "a746c7d7-5d45-53b6-a7ba-09519adebe4b", 00:17:20.848 "is_configured": true, 00:17:20.848 "data_offset": 2048, 00:17:20.848 "data_size": 63488 00:17:20.848 }, 00:17:20.848 { 00:17:20.848 "name": "BaseBdev2", 00:17:20.848 "uuid": "1fe8f748-fd75-5726-8721-5daacf1bb2ea", 00:17:20.848 "is_configured": true, 00:17:20.848 "data_offset": 2048, 00:17:20.848 "data_size": 63488 00:17:20.848 }, 00:17:20.848 { 00:17:20.848 "name": "BaseBdev3", 00:17:20.848 "uuid": "0491dd25-dd82-582d-9640-f7b996e25590", 00:17:20.848 "is_configured": true, 00:17:20.848 "data_offset": 2048, 00:17:20.848 "data_size": 63488 00:17:20.848 }, 00:17:20.848 { 00:17:20.848 "name": "BaseBdev4", 
00:17:20.848 "uuid": "2c0d6cc9-d9a3-544f-979c-65ca19e37109", 00:17:20.848 "is_configured": true, 00:17:20.848 "data_offset": 2048, 00:17:20.848 "data_size": 63488 00:17:20.848 } 00:17:20.848 ] 00:17:20.848 }' 00:17:20.848 16:13:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:21.107 16:13:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:21.107 16:13:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:21.107 16:13:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:21.107 16:13:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:22.045 16:13:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:22.045 16:13:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:22.045 16:13:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:22.045 16:13:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:22.045 16:13:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:22.045 16:13:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:22.045 16:13:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:22.045 16:13:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:22.045 16:13:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.045 16:13:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:22.045 16:13:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:17:22.045 16:13:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:22.045 "name": "raid_bdev1", 00:17:22.045 "uuid": "026fad59-808f-4b46-bb50-f4da73889a1c", 00:17:22.045 "strip_size_kb": 64, 00:17:22.045 "state": "online", 00:17:22.045 "raid_level": "raid5f", 00:17:22.045 "superblock": true, 00:17:22.045 "num_base_bdevs": 4, 00:17:22.045 "num_base_bdevs_discovered": 4, 00:17:22.045 "num_base_bdevs_operational": 4, 00:17:22.045 "process": { 00:17:22.045 "type": "rebuild", 00:17:22.045 "target": "spare", 00:17:22.045 "progress": { 00:17:22.045 "blocks": 130560, 00:17:22.045 "percent": 68 00:17:22.045 } 00:17:22.045 }, 00:17:22.045 "base_bdevs_list": [ 00:17:22.045 { 00:17:22.045 "name": "spare", 00:17:22.045 "uuid": "a746c7d7-5d45-53b6-a7ba-09519adebe4b", 00:17:22.045 "is_configured": true, 00:17:22.045 "data_offset": 2048, 00:17:22.045 "data_size": 63488 00:17:22.045 }, 00:17:22.045 { 00:17:22.045 "name": "BaseBdev2", 00:17:22.045 "uuid": "1fe8f748-fd75-5726-8721-5daacf1bb2ea", 00:17:22.045 "is_configured": true, 00:17:22.045 "data_offset": 2048, 00:17:22.045 "data_size": 63488 00:17:22.045 }, 00:17:22.045 { 00:17:22.045 "name": "BaseBdev3", 00:17:22.045 "uuid": "0491dd25-dd82-582d-9640-f7b996e25590", 00:17:22.045 "is_configured": true, 00:17:22.045 "data_offset": 2048, 00:17:22.045 "data_size": 63488 00:17:22.045 }, 00:17:22.045 { 00:17:22.045 "name": "BaseBdev4", 00:17:22.045 "uuid": "2c0d6cc9-d9a3-544f-979c-65ca19e37109", 00:17:22.045 "is_configured": true, 00:17:22.045 "data_offset": 2048, 00:17:22.045 "data_size": 63488 00:17:22.045 } 00:17:22.045 ] 00:17:22.045 }' 00:17:22.045 16:13:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:22.045 16:13:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:22.045 16:13:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 
00:17:22.304 16:13:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:22.304 16:13:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:23.243 16:13:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:23.243 16:13:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:23.243 16:13:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:23.243 16:13:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:23.243 16:13:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:23.243 16:13:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:23.243 16:13:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:23.243 16:13:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.243 16:13:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.243 16:13:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:23.243 16:13:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.243 16:13:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:23.243 "name": "raid_bdev1", 00:17:23.243 "uuid": "026fad59-808f-4b46-bb50-f4da73889a1c", 00:17:23.243 "strip_size_kb": 64, 00:17:23.243 "state": "online", 00:17:23.243 "raid_level": "raid5f", 00:17:23.243 "superblock": true, 00:17:23.243 "num_base_bdevs": 4, 00:17:23.243 "num_base_bdevs_discovered": 4, 00:17:23.243 "num_base_bdevs_operational": 4, 00:17:23.243 "process": { 00:17:23.243 "type": "rebuild", 00:17:23.243 "target": "spare", 
00:17:23.243 "progress": { 00:17:23.243 "blocks": 151680, 00:17:23.243 "percent": 79 00:17:23.243 } 00:17:23.243 }, 00:17:23.243 "base_bdevs_list": [ 00:17:23.243 { 00:17:23.243 "name": "spare", 00:17:23.243 "uuid": "a746c7d7-5d45-53b6-a7ba-09519adebe4b", 00:17:23.243 "is_configured": true, 00:17:23.243 "data_offset": 2048, 00:17:23.243 "data_size": 63488 00:17:23.243 }, 00:17:23.243 { 00:17:23.243 "name": "BaseBdev2", 00:17:23.243 "uuid": "1fe8f748-fd75-5726-8721-5daacf1bb2ea", 00:17:23.243 "is_configured": true, 00:17:23.243 "data_offset": 2048, 00:17:23.243 "data_size": 63488 00:17:23.243 }, 00:17:23.243 { 00:17:23.243 "name": "BaseBdev3", 00:17:23.243 "uuid": "0491dd25-dd82-582d-9640-f7b996e25590", 00:17:23.243 "is_configured": true, 00:17:23.243 "data_offset": 2048, 00:17:23.243 "data_size": 63488 00:17:23.243 }, 00:17:23.243 { 00:17:23.243 "name": "BaseBdev4", 00:17:23.243 "uuid": "2c0d6cc9-d9a3-544f-979c-65ca19e37109", 00:17:23.243 "is_configured": true, 00:17:23.243 "data_offset": 2048, 00:17:23.243 "data_size": 63488 00:17:23.243 } 00:17:23.243 ] 00:17:23.243 }' 00:17:23.243 16:13:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:23.243 16:13:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:23.243 16:13:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:23.243 16:13:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:23.243 16:13:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:24.623 16:13:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:24.623 16:13:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:24.623 16:13:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:17:24.623 16:13:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:24.623 16:13:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:24.623 16:13:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:24.623 16:13:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.623 16:13:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:24.623 16:13:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.623 16:13:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:24.623 16:13:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.623 16:13:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:24.623 "name": "raid_bdev1", 00:17:24.623 "uuid": "026fad59-808f-4b46-bb50-f4da73889a1c", 00:17:24.623 "strip_size_kb": 64, 00:17:24.623 "state": "online", 00:17:24.623 "raid_level": "raid5f", 00:17:24.623 "superblock": true, 00:17:24.623 "num_base_bdevs": 4, 00:17:24.623 "num_base_bdevs_discovered": 4, 00:17:24.623 "num_base_bdevs_operational": 4, 00:17:24.623 "process": { 00:17:24.623 "type": "rebuild", 00:17:24.623 "target": "spare", 00:17:24.623 "progress": { 00:17:24.623 "blocks": 174720, 00:17:24.623 "percent": 91 00:17:24.623 } 00:17:24.623 }, 00:17:24.623 "base_bdevs_list": [ 00:17:24.623 { 00:17:24.623 "name": "spare", 00:17:24.623 "uuid": "a746c7d7-5d45-53b6-a7ba-09519adebe4b", 00:17:24.623 "is_configured": true, 00:17:24.623 "data_offset": 2048, 00:17:24.623 "data_size": 63488 00:17:24.623 }, 00:17:24.623 { 00:17:24.623 "name": "BaseBdev2", 00:17:24.623 "uuid": "1fe8f748-fd75-5726-8721-5daacf1bb2ea", 00:17:24.623 "is_configured": true, 00:17:24.623 
"data_offset": 2048, 00:17:24.623 "data_size": 63488 00:17:24.623 }, 00:17:24.623 { 00:17:24.623 "name": "BaseBdev3", 00:17:24.623 "uuid": "0491dd25-dd82-582d-9640-f7b996e25590", 00:17:24.623 "is_configured": true, 00:17:24.623 "data_offset": 2048, 00:17:24.623 "data_size": 63488 00:17:24.623 }, 00:17:24.623 { 00:17:24.623 "name": "BaseBdev4", 00:17:24.623 "uuid": "2c0d6cc9-d9a3-544f-979c-65ca19e37109", 00:17:24.623 "is_configured": true, 00:17:24.623 "data_offset": 2048, 00:17:24.623 "data_size": 63488 00:17:24.623 } 00:17:24.623 ] 00:17:24.623 }' 00:17:24.623 16:13:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:24.623 16:13:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:24.623 16:13:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:24.623 16:13:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:24.623 16:13:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:25.193 [2024-12-12 16:13:51.436547] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:25.193 [2024-12-12 16:13:51.436650] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:25.193 [2024-12-12 16:13:51.436806] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:25.453 16:13:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:25.453 16:13:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:25.453 16:13:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:25.453 16:13:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:25.453 16:13:51 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:25.453 16:13:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:25.453 16:13:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.453 16:13:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:25.453 16:13:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.453 16:13:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.453 16:13:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.453 16:13:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:25.453 "name": "raid_bdev1", 00:17:25.453 "uuid": "026fad59-808f-4b46-bb50-f4da73889a1c", 00:17:25.453 "strip_size_kb": 64, 00:17:25.453 "state": "online", 00:17:25.453 "raid_level": "raid5f", 00:17:25.453 "superblock": true, 00:17:25.453 "num_base_bdevs": 4, 00:17:25.453 "num_base_bdevs_discovered": 4, 00:17:25.453 "num_base_bdevs_operational": 4, 00:17:25.453 "base_bdevs_list": [ 00:17:25.453 { 00:17:25.453 "name": "spare", 00:17:25.453 "uuid": "a746c7d7-5d45-53b6-a7ba-09519adebe4b", 00:17:25.453 "is_configured": true, 00:17:25.453 "data_offset": 2048, 00:17:25.453 "data_size": 63488 00:17:25.453 }, 00:17:25.453 { 00:17:25.453 "name": "BaseBdev2", 00:17:25.453 "uuid": "1fe8f748-fd75-5726-8721-5daacf1bb2ea", 00:17:25.453 "is_configured": true, 00:17:25.453 "data_offset": 2048, 00:17:25.453 "data_size": 63488 00:17:25.453 }, 00:17:25.453 { 00:17:25.453 "name": "BaseBdev3", 00:17:25.453 "uuid": "0491dd25-dd82-582d-9640-f7b996e25590", 00:17:25.453 "is_configured": true, 00:17:25.453 "data_offset": 2048, 00:17:25.453 "data_size": 63488 00:17:25.453 }, 00:17:25.453 { 00:17:25.453 "name": "BaseBdev4", 00:17:25.453 "uuid": 
"2c0d6cc9-d9a3-544f-979c-65ca19e37109", 00:17:25.453 "is_configured": true, 00:17:25.453 "data_offset": 2048, 00:17:25.453 "data_size": 63488 00:17:25.453 } 00:17:25.453 ] 00:17:25.453 }' 00:17:25.453 16:13:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:25.713 16:13:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:25.713 16:13:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:25.713 16:13:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:25.713 16:13:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:17:25.713 16:13:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:25.713 16:13:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:25.713 16:13:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:25.713 16:13:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:25.713 16:13:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:25.713 16:13:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.713 16:13:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:25.713 16:13:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.713 16:13:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.713 16:13:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.713 16:13:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:25.713 "name": 
"raid_bdev1", 00:17:25.713 "uuid": "026fad59-808f-4b46-bb50-f4da73889a1c", 00:17:25.713 "strip_size_kb": 64, 00:17:25.713 "state": "online", 00:17:25.713 "raid_level": "raid5f", 00:17:25.713 "superblock": true, 00:17:25.713 "num_base_bdevs": 4, 00:17:25.713 "num_base_bdevs_discovered": 4, 00:17:25.713 "num_base_bdevs_operational": 4, 00:17:25.713 "base_bdevs_list": [ 00:17:25.713 { 00:17:25.713 "name": "spare", 00:17:25.713 "uuid": "a746c7d7-5d45-53b6-a7ba-09519adebe4b", 00:17:25.713 "is_configured": true, 00:17:25.713 "data_offset": 2048, 00:17:25.713 "data_size": 63488 00:17:25.713 }, 00:17:25.713 { 00:17:25.713 "name": "BaseBdev2", 00:17:25.713 "uuid": "1fe8f748-fd75-5726-8721-5daacf1bb2ea", 00:17:25.713 "is_configured": true, 00:17:25.713 "data_offset": 2048, 00:17:25.713 "data_size": 63488 00:17:25.713 }, 00:17:25.713 { 00:17:25.713 "name": "BaseBdev3", 00:17:25.713 "uuid": "0491dd25-dd82-582d-9640-f7b996e25590", 00:17:25.713 "is_configured": true, 00:17:25.713 "data_offset": 2048, 00:17:25.713 "data_size": 63488 00:17:25.713 }, 00:17:25.713 { 00:17:25.713 "name": "BaseBdev4", 00:17:25.713 "uuid": "2c0d6cc9-d9a3-544f-979c-65ca19e37109", 00:17:25.713 "is_configured": true, 00:17:25.713 "data_offset": 2048, 00:17:25.713 "data_size": 63488 00:17:25.713 } 00:17:25.713 ] 00:17:25.713 }' 00:17:25.713 16:13:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:25.713 16:13:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:25.713 16:13:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:25.713 16:13:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:25.713 16:13:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:25.713 16:13:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:17:25.713 16:13:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:25.713 16:13:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:25.713 16:13:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:25.713 16:13:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:25.713 16:13:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:25.713 16:13:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:25.713 16:13:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:25.713 16:13:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:25.713 16:13:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.713 16:13:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:25.713 16:13:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.713 16:13:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.713 16:13:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.713 16:13:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:25.713 "name": "raid_bdev1", 00:17:25.713 "uuid": "026fad59-808f-4b46-bb50-f4da73889a1c", 00:17:25.713 "strip_size_kb": 64, 00:17:25.713 "state": "online", 00:17:25.713 "raid_level": "raid5f", 00:17:25.713 "superblock": true, 00:17:25.713 "num_base_bdevs": 4, 00:17:25.713 "num_base_bdevs_discovered": 4, 00:17:25.713 "num_base_bdevs_operational": 4, 00:17:25.713 "base_bdevs_list": [ 00:17:25.713 { 00:17:25.713 "name": "spare", 
00:17:25.714 "uuid": "a746c7d7-5d45-53b6-a7ba-09519adebe4b", 00:17:25.714 "is_configured": true, 00:17:25.714 "data_offset": 2048, 00:17:25.714 "data_size": 63488 00:17:25.714 }, 00:17:25.714 { 00:17:25.714 "name": "BaseBdev2", 00:17:25.714 "uuid": "1fe8f748-fd75-5726-8721-5daacf1bb2ea", 00:17:25.714 "is_configured": true, 00:17:25.714 "data_offset": 2048, 00:17:25.714 "data_size": 63488 00:17:25.714 }, 00:17:25.714 { 00:17:25.714 "name": "BaseBdev3", 00:17:25.714 "uuid": "0491dd25-dd82-582d-9640-f7b996e25590", 00:17:25.714 "is_configured": true, 00:17:25.714 "data_offset": 2048, 00:17:25.714 "data_size": 63488 00:17:25.714 }, 00:17:25.714 { 00:17:25.714 "name": "BaseBdev4", 00:17:25.714 "uuid": "2c0d6cc9-d9a3-544f-979c-65ca19e37109", 00:17:25.714 "is_configured": true, 00:17:25.714 "data_offset": 2048, 00:17:25.714 "data_size": 63488 00:17:25.714 } 00:17:25.714 ] 00:17:25.714 }' 00:17:25.973 16:13:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:25.973 16:13:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.233 16:13:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:26.233 16:13:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.233 16:13:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.233 [2024-12-12 16:13:52.503724] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:26.233 [2024-12-12 16:13:52.503861] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:26.233 [2024-12-12 16:13:52.503986] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:26.233 [2024-12-12 16:13:52.504116] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:26.233 [2024-12-12 16:13:52.504188] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:26.233 16:13:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.233 16:13:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.233 16:13:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.233 16:13:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.233 16:13:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:17:26.233 16:13:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.233 16:13:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:26.233 16:13:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:26.233 16:13:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:26.233 16:13:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:26.233 16:13:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:26.233 16:13:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:26.233 16:13:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:26.233 16:13:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:26.233 16:13:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:26.233 16:13:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:17:26.233 16:13:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:26.233 16:13:52 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:26.233 16:13:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:26.493 /dev/nbd0 00:17:26.493 16:13:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:26.493 16:13:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:26.493 16:13:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:26.493 16:13:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:17:26.493 16:13:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:26.493 16:13:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:26.493 16:13:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:26.493 16:13:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:17:26.493 16:13:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:26.493 16:13:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:26.493 16:13:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:26.493 1+0 records in 00:17:26.493 1+0 records out 00:17:26.493 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000545228 s, 7.5 MB/s 00:17:26.493 16:13:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:26.493 16:13:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:17:26.493 16:13:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:26.493 16:13:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:26.493 16:13:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:17:26.493 16:13:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:26.493 16:13:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:26.493 16:13:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:26.753 /dev/nbd1 00:17:26.753 16:13:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:26.753 16:13:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:26.753 16:13:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:26.753 16:13:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:17:26.753 16:13:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:26.753 16:13:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:26.753 16:13:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:26.753 16:13:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:17:26.753 16:13:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:26.753 16:13:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:26.753 16:13:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:26.753 1+0 records in 00:17:26.753 1+0 records out 00:17:26.753 4096 bytes 
(4.1 kB, 4.0 KiB) copied, 0.000506184 s, 8.1 MB/s 00:17:26.753 16:13:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:26.753 16:13:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:17:26.753 16:13:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:26.753 16:13:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:26.753 16:13:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:17:26.753 16:13:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:26.753 16:13:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:26.753 16:13:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:27.013 16:13:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:27.013 16:13:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:27.013 16:13:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:27.013 16:13:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:27.013 16:13:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:17:27.013 16:13:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:27.013 16:13:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:27.272 16:13:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:27.272 16:13:53 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:27.272 16:13:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:27.272 16:13:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:27.272 16:13:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:27.273 16:13:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:27.273 16:13:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:27.273 16:13:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:27.273 16:13:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:27.273 16:13:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:27.532 16:13:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:27.532 16:13:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:27.532 16:13:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:27.532 16:13:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:27.532 16:13:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:27.533 16:13:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:27.533 16:13:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:27.533 16:13:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:27.533 16:13:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:27.533 16:13:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:27.533 16:13:53 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.533 16:13:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.533 16:13:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.533 16:13:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:27.533 16:13:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.533 16:13:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.533 [2024-12-12 16:13:53.717628] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:27.533 [2024-12-12 16:13:53.717700] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:27.533 [2024-12-12 16:13:53.717728] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:17:27.533 [2024-12-12 16:13:53.717739] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:27.533 [2024-12-12 16:13:53.720446] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:27.533 spare 00:17:27.533 [2024-12-12 16:13:53.720548] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:27.533 [2024-12-12 16:13:53.720678] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:27.533 [2024-12-12 16:13:53.720749] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:27.533 [2024-12-12 16:13:53.720961] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:27.533 [2024-12-12 16:13:53.721088] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:27.533 [2024-12-12 16:13:53.721184] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 
is claimed 00:17:27.533 16:13:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.533 16:13:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:27.533 16:13:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.533 16:13:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.533 [2024-12-12 16:13:53.821095] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:17:27.533 [2024-12-12 16:13:53.821132] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:27.533 [2024-12-12 16:13:53.821428] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:17:27.533 [2024-12-12 16:13:53.828801] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:17:27.533 [2024-12-12 16:13:53.828822] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:17:27.533 [2024-12-12 16:13:53.829061] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:27.533 16:13:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.533 16:13:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:27.533 16:13:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:27.533 16:13:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:27.533 16:13:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:27.533 16:13:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:27.533 16:13:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:17:27.533 16:13:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:27.533 16:13:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:27.533 16:13:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:27.533 16:13:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:27.533 16:13:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:27.533 16:13:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.533 16:13:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.533 16:13:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:27.533 16:13:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.792 16:13:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:27.792 "name": "raid_bdev1", 00:17:27.792 "uuid": "026fad59-808f-4b46-bb50-f4da73889a1c", 00:17:27.792 "strip_size_kb": 64, 00:17:27.792 "state": "online", 00:17:27.792 "raid_level": "raid5f", 00:17:27.792 "superblock": true, 00:17:27.792 "num_base_bdevs": 4, 00:17:27.792 "num_base_bdevs_discovered": 4, 00:17:27.792 "num_base_bdevs_operational": 4, 00:17:27.792 "base_bdevs_list": [ 00:17:27.792 { 00:17:27.792 "name": "spare", 00:17:27.792 "uuid": "a746c7d7-5d45-53b6-a7ba-09519adebe4b", 00:17:27.792 "is_configured": true, 00:17:27.792 "data_offset": 2048, 00:17:27.792 "data_size": 63488 00:17:27.792 }, 00:17:27.792 { 00:17:27.792 "name": "BaseBdev2", 00:17:27.792 "uuid": "1fe8f748-fd75-5726-8721-5daacf1bb2ea", 00:17:27.792 "is_configured": true, 00:17:27.792 "data_offset": 2048, 00:17:27.792 "data_size": 63488 00:17:27.792 }, 00:17:27.792 { 00:17:27.792 "name": 
"BaseBdev3", 00:17:27.792 "uuid": "0491dd25-dd82-582d-9640-f7b996e25590", 00:17:27.792 "is_configured": true, 00:17:27.792 "data_offset": 2048, 00:17:27.792 "data_size": 63488 00:17:27.792 }, 00:17:27.792 { 00:17:27.792 "name": "BaseBdev4", 00:17:27.792 "uuid": "2c0d6cc9-d9a3-544f-979c-65ca19e37109", 00:17:27.792 "is_configured": true, 00:17:27.792 "data_offset": 2048, 00:17:27.792 "data_size": 63488 00:17:27.792 } 00:17:27.792 ] 00:17:27.792 }' 00:17:27.792 16:13:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:27.792 16:13:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.051 16:13:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:28.051 16:13:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:28.051 16:13:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:28.051 16:13:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:28.051 16:13:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:28.051 16:13:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.051 16:13:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.052 16:13:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.052 16:13:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:28.052 16:13:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.052 16:13:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:28.052 "name": "raid_bdev1", 00:17:28.052 "uuid": "026fad59-808f-4b46-bb50-f4da73889a1c", 00:17:28.052 
"strip_size_kb": 64, 00:17:28.052 "state": "online", 00:17:28.052 "raid_level": "raid5f", 00:17:28.052 "superblock": true, 00:17:28.052 "num_base_bdevs": 4, 00:17:28.052 "num_base_bdevs_discovered": 4, 00:17:28.052 "num_base_bdevs_operational": 4, 00:17:28.052 "base_bdevs_list": [ 00:17:28.052 { 00:17:28.052 "name": "spare", 00:17:28.052 "uuid": "a746c7d7-5d45-53b6-a7ba-09519adebe4b", 00:17:28.052 "is_configured": true, 00:17:28.052 "data_offset": 2048, 00:17:28.052 "data_size": 63488 00:17:28.052 }, 00:17:28.052 { 00:17:28.052 "name": "BaseBdev2", 00:17:28.052 "uuid": "1fe8f748-fd75-5726-8721-5daacf1bb2ea", 00:17:28.052 "is_configured": true, 00:17:28.052 "data_offset": 2048, 00:17:28.052 "data_size": 63488 00:17:28.052 }, 00:17:28.052 { 00:17:28.052 "name": "BaseBdev3", 00:17:28.052 "uuid": "0491dd25-dd82-582d-9640-f7b996e25590", 00:17:28.052 "is_configured": true, 00:17:28.052 "data_offset": 2048, 00:17:28.052 "data_size": 63488 00:17:28.052 }, 00:17:28.052 { 00:17:28.052 "name": "BaseBdev4", 00:17:28.052 "uuid": "2c0d6cc9-d9a3-544f-979c-65ca19e37109", 00:17:28.052 "is_configured": true, 00:17:28.052 "data_offset": 2048, 00:17:28.052 "data_size": 63488 00:17:28.052 } 00:17:28.052 ] 00:17:28.052 }' 00:17:28.052 16:13:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:28.312 16:13:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:28.312 16:13:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:28.312 16:13:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:28.312 16:13:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.312 16:13:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:17:28.312 16:13:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 
-- # xtrace_disable 00:17:28.312 16:13:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.312 16:13:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.312 16:13:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:28.312 16:13:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:28.312 16:13:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.312 16:13:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.312 [2024-12-12 16:13:54.485091] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:28.312 16:13:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.312 16:13:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:28.312 16:13:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:28.312 16:13:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:28.312 16:13:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:28.312 16:13:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:28.312 16:13:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:28.312 16:13:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:28.312 16:13:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:28.312 16:13:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:28.312 16:13:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local 
tmp 00:17:28.312 16:13:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.312 16:13:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.312 16:13:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.312 16:13:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:28.312 16:13:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.312 16:13:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:28.312 "name": "raid_bdev1", 00:17:28.312 "uuid": "026fad59-808f-4b46-bb50-f4da73889a1c", 00:17:28.312 "strip_size_kb": 64, 00:17:28.312 "state": "online", 00:17:28.312 "raid_level": "raid5f", 00:17:28.312 "superblock": true, 00:17:28.312 "num_base_bdevs": 4, 00:17:28.312 "num_base_bdevs_discovered": 3, 00:17:28.312 "num_base_bdevs_operational": 3, 00:17:28.312 "base_bdevs_list": [ 00:17:28.312 { 00:17:28.312 "name": null, 00:17:28.312 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:28.312 "is_configured": false, 00:17:28.312 "data_offset": 0, 00:17:28.312 "data_size": 63488 00:17:28.312 }, 00:17:28.312 { 00:17:28.312 "name": "BaseBdev2", 00:17:28.312 "uuid": "1fe8f748-fd75-5726-8721-5daacf1bb2ea", 00:17:28.312 "is_configured": true, 00:17:28.312 "data_offset": 2048, 00:17:28.312 "data_size": 63488 00:17:28.312 }, 00:17:28.312 { 00:17:28.312 "name": "BaseBdev3", 00:17:28.312 "uuid": "0491dd25-dd82-582d-9640-f7b996e25590", 00:17:28.312 "is_configured": true, 00:17:28.312 "data_offset": 2048, 00:17:28.312 "data_size": 63488 00:17:28.312 }, 00:17:28.312 { 00:17:28.312 "name": "BaseBdev4", 00:17:28.312 "uuid": "2c0d6cc9-d9a3-544f-979c-65ca19e37109", 00:17:28.312 "is_configured": true, 00:17:28.312 "data_offset": 2048, 00:17:28.312 "data_size": 63488 00:17:28.312 } 00:17:28.312 ] 00:17:28.312 }' 
00:17:28.312 16:13:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:28.312 16:13:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.571 16:13:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:28.571 16:13:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.571 16:13:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.830 [2024-12-12 16:13:54.924403] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:28.830 [2024-12-12 16:13:54.924680] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:28.830 [2024-12-12 16:13:54.924758] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:17:28.830 [2024-12-12 16:13:54.924830] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:28.830 [2024-12-12 16:13:54.941075] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000492a0 00:17:28.830 16:13:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.830 16:13:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:28.830 [2024-12-12 16:13:54.950390] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:29.770 16:13:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:29.770 16:13:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:29.770 16:13:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:29.770 16:13:55 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:17:29.770 16:13:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:29.770 16:13:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:29.770 16:13:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:29.770 16:13:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.770 16:13:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:29.770 16:13:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.770 16:13:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:29.770 "name": "raid_bdev1", 00:17:29.770 "uuid": "026fad59-808f-4b46-bb50-f4da73889a1c", 00:17:29.770 "strip_size_kb": 64, 00:17:29.770 "state": "online", 00:17:29.770 "raid_level": "raid5f", 00:17:29.770 "superblock": true, 00:17:29.770 "num_base_bdevs": 4, 00:17:29.770 "num_base_bdevs_discovered": 4, 00:17:29.770 "num_base_bdevs_operational": 4, 00:17:29.770 "process": { 00:17:29.770 "type": "rebuild", 00:17:29.770 "target": "spare", 00:17:29.770 "progress": { 00:17:29.770 "blocks": 19200, 00:17:29.770 "percent": 10 00:17:29.770 } 00:17:29.770 }, 00:17:29.770 "base_bdevs_list": [ 00:17:29.770 { 00:17:29.770 "name": "spare", 00:17:29.770 "uuid": "a746c7d7-5d45-53b6-a7ba-09519adebe4b", 00:17:29.770 "is_configured": true, 00:17:29.770 "data_offset": 2048, 00:17:29.770 "data_size": 63488 00:17:29.770 }, 00:17:29.770 { 00:17:29.770 "name": "BaseBdev2", 00:17:29.770 "uuid": "1fe8f748-fd75-5726-8721-5daacf1bb2ea", 00:17:29.770 "is_configured": true, 00:17:29.770 "data_offset": 2048, 00:17:29.770 "data_size": 63488 00:17:29.770 }, 00:17:29.770 { 00:17:29.770 "name": "BaseBdev3", 00:17:29.770 "uuid": "0491dd25-dd82-582d-9640-f7b996e25590", 00:17:29.770 
"is_configured": true, 00:17:29.770 "data_offset": 2048, 00:17:29.770 "data_size": 63488 00:17:29.770 }, 00:17:29.770 { 00:17:29.770 "name": "BaseBdev4", 00:17:29.770 "uuid": "2c0d6cc9-d9a3-544f-979c-65ca19e37109", 00:17:29.770 "is_configured": true, 00:17:29.770 "data_offset": 2048, 00:17:29.770 "data_size": 63488 00:17:29.770 } 00:17:29.770 ] 00:17:29.770 }' 00:17:29.770 16:13:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:29.770 16:13:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:29.770 16:13:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:29.770 16:13:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:29.770 16:13:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:29.770 16:13:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.770 16:13:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:29.770 [2024-12-12 16:13:56.101165] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:30.031 [2024-12-12 16:13:56.158416] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:30.031 [2024-12-12 16:13:56.158476] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:30.031 [2024-12-12 16:13:56.158493] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:30.031 [2024-12-12 16:13:56.158502] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:30.031 16:13:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.031 16:13:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state 
raid_bdev1 online raid5f 64 3 00:17:30.031 16:13:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:30.031 16:13:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:30.031 16:13:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:30.031 16:13:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:30.031 16:13:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:30.031 16:13:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:30.031 16:13:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:30.031 16:13:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:30.031 16:13:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:30.031 16:13:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.031 16:13:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:30.031 16:13:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.031 16:13:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.031 16:13:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.031 16:13:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:30.031 "name": "raid_bdev1", 00:17:30.031 "uuid": "026fad59-808f-4b46-bb50-f4da73889a1c", 00:17:30.031 "strip_size_kb": 64, 00:17:30.031 "state": "online", 00:17:30.031 "raid_level": "raid5f", 00:17:30.031 "superblock": true, 00:17:30.031 "num_base_bdevs": 4, 00:17:30.031 "num_base_bdevs_discovered": 3, 
00:17:30.031 "num_base_bdevs_operational": 3, 00:17:30.031 "base_bdevs_list": [ 00:17:30.031 { 00:17:30.031 "name": null, 00:17:30.031 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.031 "is_configured": false, 00:17:30.031 "data_offset": 0, 00:17:30.031 "data_size": 63488 00:17:30.031 }, 00:17:30.031 { 00:17:30.031 "name": "BaseBdev2", 00:17:30.031 "uuid": "1fe8f748-fd75-5726-8721-5daacf1bb2ea", 00:17:30.031 "is_configured": true, 00:17:30.031 "data_offset": 2048, 00:17:30.031 "data_size": 63488 00:17:30.031 }, 00:17:30.031 { 00:17:30.031 "name": "BaseBdev3", 00:17:30.031 "uuid": "0491dd25-dd82-582d-9640-f7b996e25590", 00:17:30.031 "is_configured": true, 00:17:30.031 "data_offset": 2048, 00:17:30.031 "data_size": 63488 00:17:30.031 }, 00:17:30.031 { 00:17:30.031 "name": "BaseBdev4", 00:17:30.031 "uuid": "2c0d6cc9-d9a3-544f-979c-65ca19e37109", 00:17:30.031 "is_configured": true, 00:17:30.031 "data_offset": 2048, 00:17:30.031 "data_size": 63488 00:17:30.031 } 00:17:30.031 ] 00:17:30.031 }' 00:17:30.031 16:13:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:30.031 16:13:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.298 16:13:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:30.298 16:13:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.298 16:13:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.298 [2024-12-12 16:13:56.586752] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:30.298 [2024-12-12 16:13:56.586878] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:30.298 [2024-12-12 16:13:56.586934] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:17:30.298 [2024-12-12 16:13:56.586986] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:30.298 [2024-12-12 16:13:56.587549] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:30.298 [2024-12-12 16:13:56.587638] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:30.299 [2024-12-12 16:13:56.587778] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:30.299 [2024-12-12 16:13:56.587827] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:30.299 [2024-12-12 16:13:56.587888] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:17:30.299 [2024-12-12 16:13:56.587959] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:30.299 [2024-12-12 16:13:56.603185] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049370 00:17:30.299 spare 00:17:30.299 16:13:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.299 16:13:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:30.299 [2024-12-12 16:13:56.612057] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:31.685 16:13:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:31.685 16:13:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:31.685 16:13:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:31.685 16:13:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:31.685 16:13:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:31.685 16:13:57 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.685 16:13:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:31.685 16:13:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.685 16:13:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:31.685 16:13:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.685 16:13:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:31.685 "name": "raid_bdev1", 00:17:31.685 "uuid": "026fad59-808f-4b46-bb50-f4da73889a1c", 00:17:31.685 "strip_size_kb": 64, 00:17:31.685 "state": "online", 00:17:31.685 "raid_level": "raid5f", 00:17:31.685 "superblock": true, 00:17:31.685 "num_base_bdevs": 4, 00:17:31.685 "num_base_bdevs_discovered": 4, 00:17:31.685 "num_base_bdevs_operational": 4, 00:17:31.685 "process": { 00:17:31.685 "type": "rebuild", 00:17:31.685 "target": "spare", 00:17:31.685 "progress": { 00:17:31.685 "blocks": 19200, 00:17:31.685 "percent": 10 00:17:31.685 } 00:17:31.685 }, 00:17:31.685 "base_bdevs_list": [ 00:17:31.685 { 00:17:31.685 "name": "spare", 00:17:31.685 "uuid": "a746c7d7-5d45-53b6-a7ba-09519adebe4b", 00:17:31.685 "is_configured": true, 00:17:31.685 "data_offset": 2048, 00:17:31.685 "data_size": 63488 00:17:31.685 }, 00:17:31.685 { 00:17:31.685 "name": "BaseBdev2", 00:17:31.685 "uuid": "1fe8f748-fd75-5726-8721-5daacf1bb2ea", 00:17:31.685 "is_configured": true, 00:17:31.685 "data_offset": 2048, 00:17:31.685 "data_size": 63488 00:17:31.685 }, 00:17:31.685 { 00:17:31.685 "name": "BaseBdev3", 00:17:31.685 "uuid": "0491dd25-dd82-582d-9640-f7b996e25590", 00:17:31.685 "is_configured": true, 00:17:31.685 "data_offset": 2048, 00:17:31.685 "data_size": 63488 00:17:31.685 }, 00:17:31.685 { 00:17:31.685 "name": "BaseBdev4", 00:17:31.685 "uuid": "2c0d6cc9-d9a3-544f-979c-65ca19e37109", 
00:17:31.685 "is_configured": true, 00:17:31.685 "data_offset": 2048, 00:17:31.685 "data_size": 63488 00:17:31.685 } 00:17:31.685 ] 00:17:31.685 }' 00:17:31.685 16:13:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:31.685 16:13:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:31.685 16:13:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:31.685 16:13:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:31.685 16:13:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:31.685 16:13:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.685 16:13:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:31.685 [2024-12-12 16:13:57.751804] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:31.686 [2024-12-12 16:13:57.819204] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:31.686 [2024-12-12 16:13:57.819255] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:31.686 [2024-12-12 16:13:57.819290] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:31.686 [2024-12-12 16:13:57.819297] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:31.686 16:13:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.686 16:13:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:31.686 16:13:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:31.686 16:13:57 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:31.686 16:13:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:31.686 16:13:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:31.686 16:13:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:31.686 16:13:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:31.686 16:13:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:31.686 16:13:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:31.686 16:13:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:31.686 16:13:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:31.686 16:13:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.686 16:13:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.686 16:13:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:31.686 16:13:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.686 16:13:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:31.686 "name": "raid_bdev1", 00:17:31.686 "uuid": "026fad59-808f-4b46-bb50-f4da73889a1c", 00:17:31.686 "strip_size_kb": 64, 00:17:31.686 "state": "online", 00:17:31.686 "raid_level": "raid5f", 00:17:31.686 "superblock": true, 00:17:31.686 "num_base_bdevs": 4, 00:17:31.686 "num_base_bdevs_discovered": 3, 00:17:31.686 "num_base_bdevs_operational": 3, 00:17:31.686 "base_bdevs_list": [ 00:17:31.686 { 00:17:31.686 "name": null, 00:17:31.686 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:31.686 "is_configured": 
false, 00:17:31.686 "data_offset": 0, 00:17:31.686 "data_size": 63488 00:17:31.686 }, 00:17:31.686 { 00:17:31.686 "name": "BaseBdev2", 00:17:31.686 "uuid": "1fe8f748-fd75-5726-8721-5daacf1bb2ea", 00:17:31.686 "is_configured": true, 00:17:31.686 "data_offset": 2048, 00:17:31.686 "data_size": 63488 00:17:31.686 }, 00:17:31.686 { 00:17:31.686 "name": "BaseBdev3", 00:17:31.686 "uuid": "0491dd25-dd82-582d-9640-f7b996e25590", 00:17:31.686 "is_configured": true, 00:17:31.686 "data_offset": 2048, 00:17:31.686 "data_size": 63488 00:17:31.686 }, 00:17:31.686 { 00:17:31.686 "name": "BaseBdev4", 00:17:31.686 "uuid": "2c0d6cc9-d9a3-544f-979c-65ca19e37109", 00:17:31.686 "is_configured": true, 00:17:31.686 "data_offset": 2048, 00:17:31.686 "data_size": 63488 00:17:31.686 } 00:17:31.686 ] 00:17:31.686 }' 00:17:31.686 16:13:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:31.686 16:13:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:31.945 16:13:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:31.945 16:13:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:31.945 16:13:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:31.945 16:13:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:31.945 16:13:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:31.945 16:13:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.945 16:13:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:31.945 16:13:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.945 16:13:58 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:17:31.946 16:13:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.205 16:13:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:32.205 "name": "raid_bdev1", 00:17:32.205 "uuid": "026fad59-808f-4b46-bb50-f4da73889a1c", 00:17:32.205 "strip_size_kb": 64, 00:17:32.205 "state": "online", 00:17:32.205 "raid_level": "raid5f", 00:17:32.205 "superblock": true, 00:17:32.205 "num_base_bdevs": 4, 00:17:32.205 "num_base_bdevs_discovered": 3, 00:17:32.205 "num_base_bdevs_operational": 3, 00:17:32.205 "base_bdevs_list": [ 00:17:32.205 { 00:17:32.205 "name": null, 00:17:32.205 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:32.205 "is_configured": false, 00:17:32.205 "data_offset": 0, 00:17:32.205 "data_size": 63488 00:17:32.205 }, 00:17:32.205 { 00:17:32.205 "name": "BaseBdev2", 00:17:32.205 "uuid": "1fe8f748-fd75-5726-8721-5daacf1bb2ea", 00:17:32.205 "is_configured": true, 00:17:32.205 "data_offset": 2048, 00:17:32.205 "data_size": 63488 00:17:32.205 }, 00:17:32.205 { 00:17:32.205 "name": "BaseBdev3", 00:17:32.205 "uuid": "0491dd25-dd82-582d-9640-f7b996e25590", 00:17:32.205 "is_configured": true, 00:17:32.205 "data_offset": 2048, 00:17:32.205 "data_size": 63488 00:17:32.205 }, 00:17:32.205 { 00:17:32.205 "name": "BaseBdev4", 00:17:32.205 "uuid": "2c0d6cc9-d9a3-544f-979c-65ca19e37109", 00:17:32.205 "is_configured": true, 00:17:32.205 "data_offset": 2048, 00:17:32.205 "data_size": 63488 00:17:32.205 } 00:17:32.205 ] 00:17:32.205 }' 00:17:32.205 16:13:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:32.205 16:13:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:32.206 16:13:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:32.206 16:13:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 
-- # [[ none == \n\o\n\e ]] 00:17:32.206 16:13:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:17:32.206 16:13:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.206 16:13:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:32.206 16:13:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.206 16:13:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:32.206 16:13:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.206 16:13:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:32.206 [2024-12-12 16:13:58.375570] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:32.206 [2024-12-12 16:13:58.375627] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:32.206 [2024-12-12 16:13:58.375649] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:17:32.206 [2024-12-12 16:13:58.375675] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:32.206 [2024-12-12 16:13:58.376134] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:32.206 [2024-12-12 16:13:58.376154] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:32.206 [2024-12-12 16:13:58.376247] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:32.206 [2024-12-12 16:13:58.376262] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:32.206 [2024-12-12 16:13:58.376274] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain 
this bdev's uuid 00:17:32.206 [2024-12-12 16:13:58.376284] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:17:32.206 BaseBdev1 00:17:32.206 16:13:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.206 16:13:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:33.145 16:13:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:33.145 16:13:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:33.145 16:13:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:33.145 16:13:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:33.145 16:13:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:33.145 16:13:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:33.145 16:13:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:33.145 16:13:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:33.145 16:13:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:33.145 16:13:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:33.145 16:13:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.145 16:13:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:33.145 16:13:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.145 16:13:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:33.145 16:13:59 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.145 16:13:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:33.145 "name": "raid_bdev1", 00:17:33.145 "uuid": "026fad59-808f-4b46-bb50-f4da73889a1c", 00:17:33.145 "strip_size_kb": 64, 00:17:33.145 "state": "online", 00:17:33.145 "raid_level": "raid5f", 00:17:33.145 "superblock": true, 00:17:33.145 "num_base_bdevs": 4, 00:17:33.145 "num_base_bdevs_discovered": 3, 00:17:33.145 "num_base_bdevs_operational": 3, 00:17:33.145 "base_bdevs_list": [ 00:17:33.145 { 00:17:33.145 "name": null, 00:17:33.145 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:33.145 "is_configured": false, 00:17:33.145 "data_offset": 0, 00:17:33.145 "data_size": 63488 00:17:33.145 }, 00:17:33.145 { 00:17:33.145 "name": "BaseBdev2", 00:17:33.145 "uuid": "1fe8f748-fd75-5726-8721-5daacf1bb2ea", 00:17:33.145 "is_configured": true, 00:17:33.145 "data_offset": 2048, 00:17:33.145 "data_size": 63488 00:17:33.145 }, 00:17:33.145 { 00:17:33.145 "name": "BaseBdev3", 00:17:33.145 "uuid": "0491dd25-dd82-582d-9640-f7b996e25590", 00:17:33.145 "is_configured": true, 00:17:33.145 "data_offset": 2048, 00:17:33.145 "data_size": 63488 00:17:33.145 }, 00:17:33.145 { 00:17:33.145 "name": "BaseBdev4", 00:17:33.145 "uuid": "2c0d6cc9-d9a3-544f-979c-65ca19e37109", 00:17:33.145 "is_configured": true, 00:17:33.145 "data_offset": 2048, 00:17:33.145 "data_size": 63488 00:17:33.145 } 00:17:33.145 ] 00:17:33.145 }' 00:17:33.145 16:13:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:33.145 16:13:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:33.714 16:13:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:33.714 16:13:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:33.714 16:13:59 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:33.714 16:13:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:33.714 16:13:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:33.714 16:13:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:33.714 16:13:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.714 16:13:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.714 16:13:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:33.714 16:13:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.714 16:13:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:33.714 "name": "raid_bdev1", 00:17:33.714 "uuid": "026fad59-808f-4b46-bb50-f4da73889a1c", 00:17:33.714 "strip_size_kb": 64, 00:17:33.714 "state": "online", 00:17:33.714 "raid_level": "raid5f", 00:17:33.714 "superblock": true, 00:17:33.714 "num_base_bdevs": 4, 00:17:33.714 "num_base_bdevs_discovered": 3, 00:17:33.714 "num_base_bdevs_operational": 3, 00:17:33.714 "base_bdevs_list": [ 00:17:33.714 { 00:17:33.714 "name": null, 00:17:33.714 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:33.714 "is_configured": false, 00:17:33.714 "data_offset": 0, 00:17:33.714 "data_size": 63488 00:17:33.714 }, 00:17:33.714 { 00:17:33.714 "name": "BaseBdev2", 00:17:33.714 "uuid": "1fe8f748-fd75-5726-8721-5daacf1bb2ea", 00:17:33.714 "is_configured": true, 00:17:33.714 "data_offset": 2048, 00:17:33.714 "data_size": 63488 00:17:33.714 }, 00:17:33.714 { 00:17:33.714 "name": "BaseBdev3", 00:17:33.714 "uuid": "0491dd25-dd82-582d-9640-f7b996e25590", 00:17:33.714 "is_configured": true, 00:17:33.714 "data_offset": 2048, 00:17:33.714 
"data_size": 63488 00:17:33.714 }, 00:17:33.714 { 00:17:33.714 "name": "BaseBdev4", 00:17:33.714 "uuid": "2c0d6cc9-d9a3-544f-979c-65ca19e37109", 00:17:33.714 "is_configured": true, 00:17:33.714 "data_offset": 2048, 00:17:33.714 "data_size": 63488 00:17:33.714 } 00:17:33.714 ] 00:17:33.714 }' 00:17:33.714 16:13:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:33.714 16:13:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:33.714 16:13:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:33.714 16:13:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:33.714 16:13:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:33.714 16:13:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:17:33.714 16:13:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:33.714 16:13:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:33.714 16:13:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:33.714 16:13:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:33.714 16:13:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:33.714 16:13:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:33.714 16:13:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.714 16:13:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:33.714 [2024-12-12 
16:13:59.953014] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:33.714 [2024-12-12 16:13:59.953260] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:33.714 [2024-12-12 16:13:59.953324] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:33.714 request: 00:17:33.714 { 00:17:33.714 "base_bdev": "BaseBdev1", 00:17:33.714 "raid_bdev": "raid_bdev1", 00:17:33.714 "method": "bdev_raid_add_base_bdev", 00:17:33.714 "req_id": 1 00:17:33.714 } 00:17:33.714 Got JSON-RPC error response 00:17:33.714 response: 00:17:33.714 { 00:17:33.714 "code": -22, 00:17:33.714 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:17:33.714 } 00:17:33.714 16:13:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:33.714 16:13:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:17:33.714 16:13:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:33.714 16:13:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:33.714 16:13:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:33.714 16:13:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:34.651 16:14:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:34.651 16:14:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:34.651 16:14:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:34.651 16:14:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:34.651 16:14:00 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:34.651 16:14:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:34.651 16:14:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:34.651 16:14:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:34.651 16:14:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:34.651 16:14:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:34.651 16:14:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.651 16:14:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:34.651 16:14:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.651 16:14:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:34.651 16:14:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.911 16:14:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:34.911 "name": "raid_bdev1", 00:17:34.911 "uuid": "026fad59-808f-4b46-bb50-f4da73889a1c", 00:17:34.911 "strip_size_kb": 64, 00:17:34.911 "state": "online", 00:17:34.911 "raid_level": "raid5f", 00:17:34.911 "superblock": true, 00:17:34.911 "num_base_bdevs": 4, 00:17:34.911 "num_base_bdevs_discovered": 3, 00:17:34.911 "num_base_bdevs_operational": 3, 00:17:34.911 "base_bdevs_list": [ 00:17:34.911 { 00:17:34.911 "name": null, 00:17:34.911 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:34.911 "is_configured": false, 00:17:34.911 "data_offset": 0, 00:17:34.911 "data_size": 63488 00:17:34.911 }, 00:17:34.911 { 00:17:34.911 "name": "BaseBdev2", 00:17:34.911 "uuid": "1fe8f748-fd75-5726-8721-5daacf1bb2ea", 00:17:34.911 
"is_configured": true, 00:17:34.911 "data_offset": 2048, 00:17:34.911 "data_size": 63488 00:17:34.911 }, 00:17:34.911 { 00:17:34.911 "name": "BaseBdev3", 00:17:34.911 "uuid": "0491dd25-dd82-582d-9640-f7b996e25590", 00:17:34.911 "is_configured": true, 00:17:34.911 "data_offset": 2048, 00:17:34.911 "data_size": 63488 00:17:34.911 }, 00:17:34.911 { 00:17:34.911 "name": "BaseBdev4", 00:17:34.911 "uuid": "2c0d6cc9-d9a3-544f-979c-65ca19e37109", 00:17:34.911 "is_configured": true, 00:17:34.911 "data_offset": 2048, 00:17:34.911 "data_size": 63488 00:17:34.911 } 00:17:34.911 ] 00:17:34.911 }' 00:17:34.911 16:14:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:34.911 16:14:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:35.170 16:14:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:35.170 16:14:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:35.170 16:14:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:35.170 16:14:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:35.170 16:14:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:35.170 16:14:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:35.170 16:14:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.170 16:14:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.170 16:14:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:35.170 16:14:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.170 16:14:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:17:35.170 "name": "raid_bdev1", 00:17:35.170 "uuid": "026fad59-808f-4b46-bb50-f4da73889a1c", 00:17:35.170 "strip_size_kb": 64, 00:17:35.170 "state": "online", 00:17:35.170 "raid_level": "raid5f", 00:17:35.170 "superblock": true, 00:17:35.170 "num_base_bdevs": 4, 00:17:35.170 "num_base_bdevs_discovered": 3, 00:17:35.170 "num_base_bdevs_operational": 3, 00:17:35.170 "base_bdevs_list": [ 00:17:35.170 { 00:17:35.170 "name": null, 00:17:35.170 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:35.170 "is_configured": false, 00:17:35.170 "data_offset": 0, 00:17:35.170 "data_size": 63488 00:17:35.170 }, 00:17:35.170 { 00:17:35.170 "name": "BaseBdev2", 00:17:35.170 "uuid": "1fe8f748-fd75-5726-8721-5daacf1bb2ea", 00:17:35.170 "is_configured": true, 00:17:35.170 "data_offset": 2048, 00:17:35.170 "data_size": 63488 00:17:35.170 }, 00:17:35.170 { 00:17:35.170 "name": "BaseBdev3", 00:17:35.170 "uuid": "0491dd25-dd82-582d-9640-f7b996e25590", 00:17:35.170 "is_configured": true, 00:17:35.170 "data_offset": 2048, 00:17:35.170 "data_size": 63488 00:17:35.170 }, 00:17:35.170 { 00:17:35.170 "name": "BaseBdev4", 00:17:35.170 "uuid": "2c0d6cc9-d9a3-544f-979c-65ca19e37109", 00:17:35.170 "is_configured": true, 00:17:35.170 "data_offset": 2048, 00:17:35.170 "data_size": 63488 00:17:35.170 } 00:17:35.170 ] 00:17:35.170 }' 00:17:35.170 16:14:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:35.170 16:14:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:35.170 16:14:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:35.170 16:14:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:35.170 16:14:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 87200 00:17:35.170 16:14:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 
87200 ']' 00:17:35.170 16:14:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 87200 00:17:35.170 16:14:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:17:35.170 16:14:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:35.170 16:14:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87200 00:17:35.430 16:14:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:35.430 16:14:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:35.430 16:14:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87200' 00:17:35.430 killing process with pid 87200 00:17:35.430 16:14:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 87200 00:17:35.430 Received shutdown signal, test time was about 60.000000 seconds 00:17:35.430 00:17:35.430 Latency(us) 00:17:35.430 [2024-12-12T16:14:01.782Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:35.430 [2024-12-12T16:14:01.782Z] =================================================================================================================== 00:17:35.430 [2024-12-12T16:14:01.782Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:35.430 [2024-12-12 16:14:01.522708] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:35.430 16:14:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 87200 00:17:35.430 [2024-12-12 16:14:01.522870] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:35.430 [2024-12-12 16:14:01.522972] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:35.430 [2024-12-12 16:14:01.522988] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:17:35.690 [2024-12-12 16:14:01.982441] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:37.072 16:14:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:17:37.072 00:17:37.072 real 0m26.670s 00:17:37.072 user 0m33.330s 00:17:37.072 sys 0m2.968s 00:17:37.072 ************************************ 00:17:37.072 END TEST raid5f_rebuild_test_sb 00:17:37.072 ************************************ 00:17:37.072 16:14:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:37.072 16:14:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:37.072 16:14:03 bdev_raid -- bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:17:37.072 16:14:03 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:17:37.072 16:14:03 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:37.072 16:14:03 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:37.072 16:14:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:37.072 ************************************ 00:17:37.072 START TEST raid_state_function_test_sb_4k 00:17:37.072 ************************************ 00:17:37.072 16:14:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:17:37.072 16:14:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:17:37.072 16:14:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:17:37.072 16:14:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:17:37.072 16:14:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:37.072 16:14:03 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:37.072 16:14:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:37.072 16:14:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:37.072 16:14:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:37.072 16:14:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:37.072 16:14:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:37.072 16:14:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:37.072 16:14:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:37.072 16:14:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:37.072 16:14:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:37.072 16:14:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:37.072 16:14:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:37.072 16:14:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:37.072 16:14:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:37.072 16:14:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:17:37.072 16:14:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:17:37.072 16:14:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:17:37.072 Process raid pid: 88009 00:17:37.072 16:14:03 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:17:37.072 16:14:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=88009 00:17:37.072 16:14:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:37.072 16:14:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 88009' 00:17:37.072 16:14:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 88009 00:17:37.072 16:14:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 88009 ']' 00:17:37.072 16:14:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:37.072 16:14:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:37.072 16:14:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:37.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:37.072 16:14:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:37.072 16:14:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:37.072 [2024-12-12 16:14:03.204443] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:17:37.072 [2024-12-12 16:14:03.204693] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:37.072 [2024-12-12 16:14:03.384835] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:37.331 [2024-12-12 16:14:03.495190] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:37.591 [2024-12-12 16:14:03.689643] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:37.591 [2024-12-12 16:14:03.689759] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:37.850 16:14:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:37.850 16:14:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:17:37.850 16:14:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:37.851 16:14:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.851 16:14:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:37.851 [2024-12-12 16:14:04.020017] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:37.851 [2024-12-12 16:14:04.020120] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:37.851 [2024-12-12 16:14:04.020151] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:37.851 [2024-12-12 16:14:04.020173] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:37.851 16:14:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:17:37.851 16:14:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:37.851 16:14:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:37.851 16:14:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:37.851 16:14:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:37.851 16:14:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:37.851 16:14:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:37.851 16:14:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:37.851 16:14:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:37.851 16:14:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:37.851 16:14:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:37.851 16:14:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.851 16:14:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:37.851 16:14:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.851 16:14:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:37.851 16:14:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.851 16:14:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:37.851 "name": "Existed_Raid", 00:17:37.851 "uuid": 
"60d02e15-ef2d-4de5-aef9-02ea7636cdd3", 00:17:37.851 "strip_size_kb": 0, 00:17:37.851 "state": "configuring", 00:17:37.851 "raid_level": "raid1", 00:17:37.851 "superblock": true, 00:17:37.851 "num_base_bdevs": 2, 00:17:37.851 "num_base_bdevs_discovered": 0, 00:17:37.851 "num_base_bdevs_operational": 2, 00:17:37.851 "base_bdevs_list": [ 00:17:37.851 { 00:17:37.851 "name": "BaseBdev1", 00:17:37.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:37.851 "is_configured": false, 00:17:37.851 "data_offset": 0, 00:17:37.851 "data_size": 0 00:17:37.851 }, 00:17:37.851 { 00:17:37.851 "name": "BaseBdev2", 00:17:37.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:37.851 "is_configured": false, 00:17:37.851 "data_offset": 0, 00:17:37.851 "data_size": 0 00:17:37.851 } 00:17:37.851 ] 00:17:37.851 }' 00:17:37.851 16:14:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:37.851 16:14:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:38.421 16:14:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:38.421 16:14:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.421 16:14:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:38.421 [2024-12-12 16:14:04.519096] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:38.421 [2024-12-12 16:14:04.519130] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:38.421 16:14:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.421 16:14:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:38.421 16:14:04 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.421 16:14:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:38.421 [2024-12-12 16:14:04.527069] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:38.421 [2024-12-12 16:14:04.527108] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:38.421 [2024-12-12 16:14:04.527116] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:38.421 [2024-12-12 16:14:04.527127] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:38.421 16:14:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.421 16:14:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:17:38.421 16:14:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.421 16:14:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:38.421 [2024-12-12 16:14:04.571812] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:38.421 BaseBdev1 00:17:38.421 16:14:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.421 16:14:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:38.421 16:14:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:38.421 16:14:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:38.421 16:14:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:17:38.421 16:14:04 bdev_raid.raid_state_function_test_sb_4k 
-- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:38.421 16:14:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:38.421 16:14:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:38.421 16:14:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.421 16:14:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:38.421 16:14:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.421 16:14:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:38.421 16:14:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.421 16:14:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:38.421 [ 00:17:38.421 { 00:17:38.421 "name": "BaseBdev1", 00:17:38.421 "aliases": [ 00:17:38.421 "f5f9822f-0f41-4d91-bf27-f122df3aae16" 00:17:38.421 ], 00:17:38.421 "product_name": "Malloc disk", 00:17:38.421 "block_size": 4096, 00:17:38.421 "num_blocks": 8192, 00:17:38.421 "uuid": "f5f9822f-0f41-4d91-bf27-f122df3aae16", 00:17:38.421 "assigned_rate_limits": { 00:17:38.421 "rw_ios_per_sec": 0, 00:17:38.421 "rw_mbytes_per_sec": 0, 00:17:38.421 "r_mbytes_per_sec": 0, 00:17:38.421 "w_mbytes_per_sec": 0 00:17:38.421 }, 00:17:38.421 "claimed": true, 00:17:38.421 "claim_type": "exclusive_write", 00:17:38.421 "zoned": false, 00:17:38.421 "supported_io_types": { 00:17:38.421 "read": true, 00:17:38.421 "write": true, 00:17:38.421 "unmap": true, 00:17:38.421 "flush": true, 00:17:38.421 "reset": true, 00:17:38.421 "nvme_admin": false, 00:17:38.421 "nvme_io": false, 00:17:38.421 "nvme_io_md": false, 00:17:38.421 "write_zeroes": true, 00:17:38.421 "zcopy": true, 00:17:38.421 
"get_zone_info": false, 00:17:38.421 "zone_management": false, 00:17:38.421 "zone_append": false, 00:17:38.421 "compare": false, 00:17:38.421 "compare_and_write": false, 00:17:38.421 "abort": true, 00:17:38.421 "seek_hole": false, 00:17:38.421 "seek_data": false, 00:17:38.421 "copy": true, 00:17:38.421 "nvme_iov_md": false 00:17:38.421 }, 00:17:38.421 "memory_domains": [ 00:17:38.421 { 00:17:38.421 "dma_device_id": "system", 00:17:38.421 "dma_device_type": 1 00:17:38.421 }, 00:17:38.421 { 00:17:38.421 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:38.421 "dma_device_type": 2 00:17:38.421 } 00:17:38.421 ], 00:17:38.421 "driver_specific": {} 00:17:38.421 } 00:17:38.421 ] 00:17:38.421 16:14:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.421 16:14:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:17:38.421 16:14:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:38.421 16:14:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:38.421 16:14:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:38.421 16:14:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:38.421 16:14:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:38.421 16:14:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:38.421 16:14:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:38.421 16:14:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:38.421 16:14:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:17:38.421 16:14:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:38.421 16:14:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.421 16:14:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.421 16:14:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:38.421 16:14:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:38.421 16:14:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.421 16:14:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:38.421 "name": "Existed_Raid", 00:17:38.421 "uuid": "c1449fe9-c5e8-484d-b832-caa7b9eb547f", 00:17:38.421 "strip_size_kb": 0, 00:17:38.421 "state": "configuring", 00:17:38.421 "raid_level": "raid1", 00:17:38.421 "superblock": true, 00:17:38.421 "num_base_bdevs": 2, 00:17:38.421 "num_base_bdevs_discovered": 1, 00:17:38.421 "num_base_bdevs_operational": 2, 00:17:38.421 "base_bdevs_list": [ 00:17:38.421 { 00:17:38.421 "name": "BaseBdev1", 00:17:38.421 "uuid": "f5f9822f-0f41-4d91-bf27-f122df3aae16", 00:17:38.421 "is_configured": true, 00:17:38.421 "data_offset": 256, 00:17:38.421 "data_size": 7936 00:17:38.421 }, 00:17:38.421 { 00:17:38.421 "name": "BaseBdev2", 00:17:38.421 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:38.421 "is_configured": false, 00:17:38.421 "data_offset": 0, 00:17:38.421 "data_size": 0 00:17:38.421 } 00:17:38.421 ] 00:17:38.421 }' 00:17:38.421 16:14:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:38.421 16:14:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:38.991 16:14:05 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:38.991 16:14:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.991 16:14:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:38.991 [2024-12-12 16:14:05.043037] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:38.991 [2024-12-12 16:14:05.043078] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:38.991 16:14:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.991 16:14:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:38.991 16:14:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.991 16:14:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:38.991 [2024-12-12 16:14:05.055068] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:38.991 [2024-12-12 16:14:05.056824] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:38.991 [2024-12-12 16:14:05.056880] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:38.991 16:14:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.991 16:14:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:38.991 16:14:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:38.991 16:14:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:38.991 16:14:05 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:38.991 16:14:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:38.991 16:14:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:38.991 16:14:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:38.991 16:14:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:38.991 16:14:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:38.991 16:14:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:38.991 16:14:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:38.991 16:14:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:38.991 16:14:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:38.991 16:14:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.991 16:14:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.991 16:14:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:38.991 16:14:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.991 16:14:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:38.991 "name": "Existed_Raid", 00:17:38.991 "uuid": "212242e4-e161-4639-9845-4a799b470d0c", 00:17:38.991 "strip_size_kb": 0, 00:17:38.991 "state": "configuring", 00:17:38.991 "raid_level": "raid1", 00:17:38.991 "superblock": true, 
00:17:38.991 "num_base_bdevs": 2, 00:17:38.991 "num_base_bdevs_discovered": 1, 00:17:38.991 "num_base_bdevs_operational": 2, 00:17:38.991 "base_bdevs_list": [ 00:17:38.991 { 00:17:38.991 "name": "BaseBdev1", 00:17:38.991 "uuid": "f5f9822f-0f41-4d91-bf27-f122df3aae16", 00:17:38.991 "is_configured": true, 00:17:38.991 "data_offset": 256, 00:17:38.991 "data_size": 7936 00:17:38.991 }, 00:17:38.991 { 00:17:38.991 "name": "BaseBdev2", 00:17:38.991 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:38.991 "is_configured": false, 00:17:38.991 "data_offset": 0, 00:17:38.991 "data_size": 0 00:17:38.991 } 00:17:38.991 ] 00:17:38.991 }' 00:17:38.991 16:14:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:38.991 16:14:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:39.251 16:14:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:17:39.251 16:14:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.251 16:14:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:39.251 [2024-12-12 16:14:05.551126] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:39.251 [2024-12-12 16:14:05.551484] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:39.251 [2024-12-12 16:14:05.551539] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:39.251 [2024-12-12 16:14:05.551864] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:39.251 BaseBdev2 00:17:39.251 [2024-12-12 16:14:05.552102] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:39.251 [2024-12-12 16:14:05.552120] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name 
Existed_Raid, raid_bdev 0x617000007e80 00:17:39.251 16:14:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.251 [2024-12-12 16:14:05.552264] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:39.251 16:14:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:39.251 16:14:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:39.251 16:14:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:39.251 16:14:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:17:39.251 16:14:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:39.251 16:14:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:39.251 16:14:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:39.251 16:14:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.251 16:14:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:39.251 16:14:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.251 16:14:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:39.251 16:14:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.251 16:14:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:39.251 [ 00:17:39.251 { 00:17:39.251 "name": "BaseBdev2", 00:17:39.251 "aliases": [ 00:17:39.251 "864aaff4-2689-4ff0-9b2a-4853063e0bbc" 00:17:39.251 ], 00:17:39.251 "product_name": "Malloc 
disk", 00:17:39.251 "block_size": 4096, 00:17:39.251 "num_blocks": 8192, 00:17:39.251 "uuid": "864aaff4-2689-4ff0-9b2a-4853063e0bbc", 00:17:39.251 "assigned_rate_limits": { 00:17:39.251 "rw_ios_per_sec": 0, 00:17:39.251 "rw_mbytes_per_sec": 0, 00:17:39.251 "r_mbytes_per_sec": 0, 00:17:39.251 "w_mbytes_per_sec": 0 00:17:39.251 }, 00:17:39.251 "claimed": true, 00:17:39.251 "claim_type": "exclusive_write", 00:17:39.251 "zoned": false, 00:17:39.251 "supported_io_types": { 00:17:39.251 "read": true, 00:17:39.251 "write": true, 00:17:39.251 "unmap": true, 00:17:39.251 "flush": true, 00:17:39.251 "reset": true, 00:17:39.251 "nvme_admin": false, 00:17:39.251 "nvme_io": false, 00:17:39.251 "nvme_io_md": false, 00:17:39.251 "write_zeroes": true, 00:17:39.251 "zcopy": true, 00:17:39.251 "get_zone_info": false, 00:17:39.251 "zone_management": false, 00:17:39.251 "zone_append": false, 00:17:39.251 "compare": false, 00:17:39.251 "compare_and_write": false, 00:17:39.251 "abort": true, 00:17:39.251 "seek_hole": false, 00:17:39.251 "seek_data": false, 00:17:39.251 "copy": true, 00:17:39.251 "nvme_iov_md": false 00:17:39.251 }, 00:17:39.251 "memory_domains": [ 00:17:39.251 { 00:17:39.251 "dma_device_id": "system", 00:17:39.251 "dma_device_type": 1 00:17:39.251 }, 00:17:39.252 { 00:17:39.252 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:39.252 "dma_device_type": 2 00:17:39.252 } 00:17:39.252 ], 00:17:39.252 "driver_specific": {} 00:17:39.252 } 00:17:39.252 ] 00:17:39.252 16:14:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.252 16:14:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:17:39.252 16:14:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:39.252 16:14:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:39.252 16:14:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 
-- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:39.252 16:14:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:39.252 16:14:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:39.252 16:14:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:39.252 16:14:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:39.252 16:14:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:39.252 16:14:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:39.252 16:14:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:39.252 16:14:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:39.252 16:14:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:39.252 16:14:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:39.252 16:14:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.252 16:14:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.252 16:14:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:39.252 16:14:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.512 16:14:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:39.512 "name": "Existed_Raid", 00:17:39.512 "uuid": "212242e4-e161-4639-9845-4a799b470d0c", 00:17:39.512 "strip_size_kb": 0, 00:17:39.512 "state": "online", 
00:17:39.512 "raid_level": "raid1", 00:17:39.512 "superblock": true, 00:17:39.512 "num_base_bdevs": 2, 00:17:39.512 "num_base_bdevs_discovered": 2, 00:17:39.512 "num_base_bdevs_operational": 2, 00:17:39.512 "base_bdevs_list": [ 00:17:39.512 { 00:17:39.512 "name": "BaseBdev1", 00:17:39.512 "uuid": "f5f9822f-0f41-4d91-bf27-f122df3aae16", 00:17:39.512 "is_configured": true, 00:17:39.512 "data_offset": 256, 00:17:39.512 "data_size": 7936 00:17:39.512 }, 00:17:39.512 { 00:17:39.512 "name": "BaseBdev2", 00:17:39.512 "uuid": "864aaff4-2689-4ff0-9b2a-4853063e0bbc", 00:17:39.512 "is_configured": true, 00:17:39.512 "data_offset": 256, 00:17:39.512 "data_size": 7936 00:17:39.512 } 00:17:39.512 ] 00:17:39.512 }' 00:17:39.512 16:14:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:39.512 16:14:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:39.772 16:14:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:39.772 16:14:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:39.772 16:14:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:39.772 16:14:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:39.772 16:14:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:17:39.772 16:14:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:39.772 16:14:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:39.772 16:14:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:39.772 16:14:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:39.772 16:14:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:39.772 [2024-12-12 16:14:05.978661] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:39.772 16:14:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.772 16:14:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:39.772 "name": "Existed_Raid", 00:17:39.772 "aliases": [ 00:17:39.772 "212242e4-e161-4639-9845-4a799b470d0c" 00:17:39.772 ], 00:17:39.772 "product_name": "Raid Volume", 00:17:39.772 "block_size": 4096, 00:17:39.772 "num_blocks": 7936, 00:17:39.772 "uuid": "212242e4-e161-4639-9845-4a799b470d0c", 00:17:39.772 "assigned_rate_limits": { 00:17:39.772 "rw_ios_per_sec": 0, 00:17:39.772 "rw_mbytes_per_sec": 0, 00:17:39.772 "r_mbytes_per_sec": 0, 00:17:39.772 "w_mbytes_per_sec": 0 00:17:39.772 }, 00:17:39.772 "claimed": false, 00:17:39.772 "zoned": false, 00:17:39.772 "supported_io_types": { 00:17:39.772 "read": true, 00:17:39.772 "write": true, 00:17:39.772 "unmap": false, 00:17:39.772 "flush": false, 00:17:39.772 "reset": true, 00:17:39.772 "nvme_admin": false, 00:17:39.772 "nvme_io": false, 00:17:39.772 "nvme_io_md": false, 00:17:39.772 "write_zeroes": true, 00:17:39.772 "zcopy": false, 00:17:39.772 "get_zone_info": false, 00:17:39.772 "zone_management": false, 00:17:39.772 "zone_append": false, 00:17:39.772 "compare": false, 00:17:39.772 "compare_and_write": false, 00:17:39.772 "abort": false, 00:17:39.772 "seek_hole": false, 00:17:39.772 "seek_data": false, 00:17:39.772 "copy": false, 00:17:39.772 "nvme_iov_md": false 00:17:39.772 }, 00:17:39.772 "memory_domains": [ 00:17:39.772 { 00:17:39.772 "dma_device_id": "system", 00:17:39.772 "dma_device_type": 1 00:17:39.772 }, 00:17:39.772 { 00:17:39.772 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:39.772 "dma_device_type": 2 00:17:39.772 }, 00:17:39.772 { 00:17:39.772 
"dma_device_id": "system", 00:17:39.772 "dma_device_type": 1 00:17:39.772 }, 00:17:39.772 { 00:17:39.772 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:39.772 "dma_device_type": 2 00:17:39.772 } 00:17:39.772 ], 00:17:39.772 "driver_specific": { 00:17:39.772 "raid": { 00:17:39.772 "uuid": "212242e4-e161-4639-9845-4a799b470d0c", 00:17:39.772 "strip_size_kb": 0, 00:17:39.772 "state": "online", 00:17:39.772 "raid_level": "raid1", 00:17:39.772 "superblock": true, 00:17:39.772 "num_base_bdevs": 2, 00:17:39.772 "num_base_bdevs_discovered": 2, 00:17:39.772 "num_base_bdevs_operational": 2, 00:17:39.772 "base_bdevs_list": [ 00:17:39.772 { 00:17:39.772 "name": "BaseBdev1", 00:17:39.772 "uuid": "f5f9822f-0f41-4d91-bf27-f122df3aae16", 00:17:39.772 "is_configured": true, 00:17:39.772 "data_offset": 256, 00:17:39.772 "data_size": 7936 00:17:39.772 }, 00:17:39.772 { 00:17:39.772 "name": "BaseBdev2", 00:17:39.772 "uuid": "864aaff4-2689-4ff0-9b2a-4853063e0bbc", 00:17:39.772 "is_configured": true, 00:17:39.772 "data_offset": 256, 00:17:39.772 "data_size": 7936 00:17:39.772 } 00:17:39.772 ] 00:17:39.772 } 00:17:39.772 } 00:17:39.772 }' 00:17:39.772 16:14:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:39.772 16:14:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:39.772 BaseBdev2' 00:17:39.772 16:14:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:39.772 16:14:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:17:39.772 16:14:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:39.772 16:14:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 
00:17:39.772 16:14:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.772 16:14:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:39.772 16:14:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:40.032 16:14:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.032 16:14:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:40.032 16:14:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:40.032 16:14:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:40.032 16:14:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:40.032 16:14:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.032 16:14:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:40.032 16:14:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:40.032 16:14:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.032 16:14:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:40.032 16:14:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:40.032 16:14:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:40.032 16:14:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.032 
16:14:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:40.032 [2024-12-12 16:14:06.206059] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:40.032 16:14:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.032 16:14:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:40.032 16:14:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:17:40.032 16:14:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:40.033 16:14:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:17:40.033 16:14:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:40.033 16:14:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:17:40.033 16:14:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:40.033 16:14:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:40.033 16:14:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:40.033 16:14:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:40.033 16:14:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:40.033 16:14:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:40.033 16:14:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:40.033 16:14:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:40.033 16:14:06 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:40.033 16:14:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.033 16:14:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.033 16:14:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:40.033 16:14:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:40.033 16:14:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.033 16:14:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:40.033 "name": "Existed_Raid", 00:17:40.033 "uuid": "212242e4-e161-4639-9845-4a799b470d0c", 00:17:40.033 "strip_size_kb": 0, 00:17:40.033 "state": "online", 00:17:40.033 "raid_level": "raid1", 00:17:40.033 "superblock": true, 00:17:40.033 "num_base_bdevs": 2, 00:17:40.033 "num_base_bdevs_discovered": 1, 00:17:40.033 "num_base_bdevs_operational": 1, 00:17:40.033 "base_bdevs_list": [ 00:17:40.033 { 00:17:40.033 "name": null, 00:17:40.033 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:40.033 "is_configured": false, 00:17:40.033 "data_offset": 0, 00:17:40.033 "data_size": 7936 00:17:40.033 }, 00:17:40.033 { 00:17:40.033 "name": "BaseBdev2", 00:17:40.033 "uuid": "864aaff4-2689-4ff0-9b2a-4853063e0bbc", 00:17:40.033 "is_configured": true, 00:17:40.033 "data_offset": 256, 00:17:40.033 "data_size": 7936 00:17:40.033 } 00:17:40.033 ] 00:17:40.033 }' 00:17:40.033 16:14:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:40.033 16:14:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:40.603 16:14:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:40.603 16:14:06 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:40.603 16:14:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.603 16:14:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.603 16:14:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:40.603 16:14:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:40.603 16:14:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.603 16:14:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:40.603 16:14:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:40.603 16:14:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:40.603 16:14:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.603 16:14:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:40.603 [2024-12-12 16:14:06.794589] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:40.603 [2024-12-12 16:14:06.794696] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:40.603 [2024-12-12 16:14:06.887377] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:40.603 [2024-12-12 16:14:06.887515] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:40.603 [2024-12-12 16:14:06.887533] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:40.603 16:14:06 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.603 16:14:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:40.603 16:14:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:40.603 16:14:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.603 16:14:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:40.603 16:14:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.603 16:14:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:40.603 16:14:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.603 16:14:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:40.603 16:14:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:40.603 16:14:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:17:40.603 16:14:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 88009 00:17:40.603 16:14:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 88009 ']' 00:17:40.603 16:14:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 88009 00:17:40.603 16:14:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:17:40.603 16:14:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:40.603 16:14:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88009 00:17:40.863 16:14:06 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:40.863 16:14:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:40.863 16:14:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88009' 00:17:40.863 killing process with pid 88009 00:17:40.863 16:14:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@973 -- # kill 88009 00:17:40.863 [2024-12-12 16:14:06.971267] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:40.863 16:14:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@978 -- # wait 88009 00:17:40.863 [2024-12-12 16:14:06.988100] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:41.803 16:14:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:17:41.803 00:17:41.803 real 0m4.963s 00:17:41.803 user 0m7.173s 00:17:41.803 sys 0m0.828s 00:17:41.803 16:14:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:41.803 16:14:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:41.803 ************************************ 00:17:41.803 END TEST raid_state_function_test_sb_4k 00:17:41.803 ************************************ 00:17:41.803 16:14:08 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:17:41.803 16:14:08 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:41.803 16:14:08 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:41.803 16:14:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:41.803 ************************************ 00:17:41.803 START TEST raid_superblock_test_4k 00:17:41.803 ************************************ 00:17:41.803 16:14:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1129 -- # 
raid_superblock_test raid1 2 00:17:41.803 16:14:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:17:41.803 16:14:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:17:41.803 16:14:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:41.803 16:14:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:17:41.803 16:14:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:17:41.803 16:14:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:17:41.803 16:14:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:17:41.803 16:14:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:41.803 16:14:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:17:41.803 16:14:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:17:41.803 16:14:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:17:41.803 16:14:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:41.803 16:14:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:41.803 16:14:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:17:41.803 16:14:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:17:41.803 16:14:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=88274 00:17:41.803 16:14:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # waitforlisten 88274 00:17:41.803 16:14:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L 
bdev_raid 00:17:41.803 16:14:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # '[' -z 88274 ']' 00:17:41.803 16:14:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:41.803 16:14:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:41.803 16:14:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:41.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:41.803 16:14:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:41.803 16:14:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:42.064 [2024-12-12 16:14:08.225513] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:17:42.064 [2024-12-12 16:14:08.225702] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88274 ] 00:17:42.064 [2024-12-12 16:14:08.397735] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:42.323 [2024-12-12 16:14:08.511755] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:42.584 [2024-12-12 16:14:08.702972] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:42.584 [2024-12-12 16:14:08.703114] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:42.844 16:14:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:42.844 16:14:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@868 -- # return 0 00:17:42.844 16:14:09 bdev_raid.raid_superblock_test_4k 
-- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:17:42.844 16:14:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:42.844 16:14:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:17:42.844 16:14:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:17:42.844 16:14:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:42.844 16:14:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:42.844 16:14:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:42.844 16:14:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:42.844 16:14:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc1 00:17:42.844 16:14:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.844 16:14:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:42.844 malloc1 00:17:42.844 16:14:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.844 16:14:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:42.844 16:14:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.844 16:14:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:42.844 [2024-12-12 16:14:09.094832] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:42.844 [2024-12-12 16:14:09.094947] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:42.844 [2024-12-12 16:14:09.095005] 
vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:42.844 [2024-12-12 16:14:09.095040] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:42.844 [2024-12-12 16:14:09.097147] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:42.844 [2024-12-12 16:14:09.097219] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:42.844 pt1 00:17:42.844 16:14:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.844 16:14:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:42.844 16:14:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:42.844 16:14:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:17:42.844 16:14:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:17:42.844 16:14:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:42.844 16:14:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:42.844 16:14:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:42.844 16:14:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:42.844 16:14:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:17:42.844 16:14:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.844 16:14:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:42.844 malloc2 00:17:42.844 16:14:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.844 16:14:09 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:42.844 16:14:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.844 16:14:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:42.844 [2024-12-12 16:14:09.148764] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:42.844 [2024-12-12 16:14:09.148883] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:42.844 [2024-12-12 16:14:09.148957] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:42.844 [2024-12-12 16:14:09.148991] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:42.844 [2024-12-12 16:14:09.151311] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:42.844 [2024-12-12 16:14:09.151389] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:42.844 pt2 00:17:42.844 16:14:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.844 16:14:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:42.844 16:14:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:42.844 16:14:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:17:42.844 16:14:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.844 16:14:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:42.844 [2024-12-12 16:14:09.160785] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:42.845 [2024-12-12 16:14:09.162655] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev pt2 is claimed 00:17:42.845 [2024-12-12 16:14:09.162914] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:42.845 [2024-12-12 16:14:09.162966] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:42.845 [2024-12-12 16:14:09.163231] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:42.845 [2024-12-12 16:14:09.163429] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:42.845 [2024-12-12 16:14:09.163477] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:42.845 [2024-12-12 16:14:09.163675] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:42.845 16:14:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.845 16:14:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:42.845 16:14:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:42.845 16:14:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:42.845 16:14:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:42.845 16:14:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:42.845 16:14:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:42.845 16:14:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:42.845 16:14:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:42.845 16:14:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:42.845 16:14:09 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:42.845 16:14:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.845 16:14:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:42.845 16:14:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.845 16:14:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:42.845 16:14:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.105 16:14:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:43.105 "name": "raid_bdev1", 00:17:43.105 "uuid": "b93ec549-db9a-43b6-a757-ed18f81b2a61", 00:17:43.105 "strip_size_kb": 0, 00:17:43.105 "state": "online", 00:17:43.105 "raid_level": "raid1", 00:17:43.105 "superblock": true, 00:17:43.105 "num_base_bdevs": 2, 00:17:43.105 "num_base_bdevs_discovered": 2, 00:17:43.105 "num_base_bdevs_operational": 2, 00:17:43.105 "base_bdevs_list": [ 00:17:43.105 { 00:17:43.105 "name": "pt1", 00:17:43.105 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:43.105 "is_configured": true, 00:17:43.105 "data_offset": 256, 00:17:43.105 "data_size": 7936 00:17:43.105 }, 00:17:43.105 { 00:17:43.105 "name": "pt2", 00:17:43.105 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:43.105 "is_configured": true, 00:17:43.105 "data_offset": 256, 00:17:43.105 "data_size": 7936 00:17:43.105 } 00:17:43.105 ] 00:17:43.105 }' 00:17:43.105 16:14:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:43.105 16:14:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:43.365 16:14:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:17:43.365 16:14:09 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:43.365 16:14:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:43.365 16:14:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:43.365 16:14:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:17:43.365 16:14:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:43.365 16:14:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:43.365 16:14:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:43.365 16:14:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.365 16:14:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:43.365 [2024-12-12 16:14:09.592285] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:43.365 16:14:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.365 16:14:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:43.365 "name": "raid_bdev1", 00:17:43.365 "aliases": [ 00:17:43.365 "b93ec549-db9a-43b6-a757-ed18f81b2a61" 00:17:43.365 ], 00:17:43.365 "product_name": "Raid Volume", 00:17:43.365 "block_size": 4096, 00:17:43.365 "num_blocks": 7936, 00:17:43.365 "uuid": "b93ec549-db9a-43b6-a757-ed18f81b2a61", 00:17:43.365 "assigned_rate_limits": { 00:17:43.365 "rw_ios_per_sec": 0, 00:17:43.365 "rw_mbytes_per_sec": 0, 00:17:43.365 "r_mbytes_per_sec": 0, 00:17:43.365 "w_mbytes_per_sec": 0 00:17:43.365 }, 00:17:43.365 "claimed": false, 00:17:43.365 "zoned": false, 00:17:43.365 "supported_io_types": { 00:17:43.365 "read": true, 00:17:43.365 "write": true, 00:17:43.365 "unmap": false, 00:17:43.365 "flush": false, 00:17:43.365 "reset": true, 00:17:43.365 
"nvme_admin": false, 00:17:43.365 "nvme_io": false, 00:17:43.365 "nvme_io_md": false, 00:17:43.365 "write_zeroes": true, 00:17:43.365 "zcopy": false, 00:17:43.365 "get_zone_info": false, 00:17:43.365 "zone_management": false, 00:17:43.365 "zone_append": false, 00:17:43.365 "compare": false, 00:17:43.365 "compare_and_write": false, 00:17:43.365 "abort": false, 00:17:43.365 "seek_hole": false, 00:17:43.365 "seek_data": false, 00:17:43.365 "copy": false, 00:17:43.365 "nvme_iov_md": false 00:17:43.365 }, 00:17:43.365 "memory_domains": [ 00:17:43.365 { 00:17:43.365 "dma_device_id": "system", 00:17:43.365 "dma_device_type": 1 00:17:43.365 }, 00:17:43.365 { 00:17:43.365 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:43.365 "dma_device_type": 2 00:17:43.365 }, 00:17:43.365 { 00:17:43.365 "dma_device_id": "system", 00:17:43.365 "dma_device_type": 1 00:17:43.365 }, 00:17:43.365 { 00:17:43.365 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:43.365 "dma_device_type": 2 00:17:43.365 } 00:17:43.365 ], 00:17:43.365 "driver_specific": { 00:17:43.365 "raid": { 00:17:43.365 "uuid": "b93ec549-db9a-43b6-a757-ed18f81b2a61", 00:17:43.365 "strip_size_kb": 0, 00:17:43.365 "state": "online", 00:17:43.365 "raid_level": "raid1", 00:17:43.365 "superblock": true, 00:17:43.365 "num_base_bdevs": 2, 00:17:43.366 "num_base_bdevs_discovered": 2, 00:17:43.366 "num_base_bdevs_operational": 2, 00:17:43.366 "base_bdevs_list": [ 00:17:43.366 { 00:17:43.366 "name": "pt1", 00:17:43.366 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:43.366 "is_configured": true, 00:17:43.366 "data_offset": 256, 00:17:43.366 "data_size": 7936 00:17:43.366 }, 00:17:43.366 { 00:17:43.366 "name": "pt2", 00:17:43.366 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:43.366 "is_configured": true, 00:17:43.366 "data_offset": 256, 00:17:43.366 "data_size": 7936 00:17:43.366 } 00:17:43.366 ] 00:17:43.366 } 00:17:43.366 } 00:17:43.366 }' 00:17:43.366 16:14:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- 
# jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:43.366 16:14:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:43.366 pt2' 00:17:43.366 16:14:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:43.626 16:14:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:17:43.626 16:14:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:43.626 16:14:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:43.626 16:14:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:43.626 16:14:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.626 16:14:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:43.626 16:14:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.626 16:14:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:43.626 16:14:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:43.626 16:14:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:43.626 16:14:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:43.626 16:14:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:43.626 16:14:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.626 16:14:09 bdev_raid.raid_superblock_test_4k -- 
common/autotest_common.sh@10 -- # set +x 00:17:43.626 16:14:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.626 16:14:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:43.626 16:14:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:43.626 16:14:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:43.626 16:14:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.626 16:14:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:17:43.626 16:14:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:43.626 [2024-12-12 16:14:09.839861] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:43.626 16:14:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.626 16:14:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=b93ec549-db9a-43b6-a757-ed18f81b2a61 00:17:43.626 16:14:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z b93ec549-db9a-43b6-a757-ed18f81b2a61 ']' 00:17:43.626 16:14:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:43.626 16:14:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.626 16:14:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:43.626 [2024-12-12 16:14:09.887497] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:43.626 [2024-12-12 16:14:09.887521] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:43.626 [2024-12-12 16:14:09.887605] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:17:43.626 [2024-12-12 16:14:09.887676] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:43.626 [2024-12-12 16:14:09.887691] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:43.626 16:14:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.626 16:14:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.626 16:14:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:17:43.626 16:14:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.626 16:14:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:43.626 16:14:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.626 16:14:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:17:43.626 16:14:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:17:43.626 16:14:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:43.626 16:14:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:17:43.626 16:14:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.626 16:14:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:43.626 16:14:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.626 16:14:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:43.626 16:14:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:17:43.626 16:14:09 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.626 16:14:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:43.626 16:14:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.626 16:14:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:17:43.626 16:14:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.626 16:14:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:43.626 16:14:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:43.887 16:14:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.887 16:14:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:17:43.887 16:14:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:43.887 16:14:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # local es=0 00:17:43.887 16:14:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:43.887 16:14:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:43.887 16:14:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:43.887 16:14:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:43.887 16:14:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:43.887 16:14:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # rpc_cmd 
bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:43.887 16:14:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.887 16:14:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:43.887 [2024-12-12 16:14:10.019304] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:43.887 [2024-12-12 16:14:10.021186] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:43.887 [2024-12-12 16:14:10.021289] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:43.887 [2024-12-12 16:14:10.021387] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:43.887 [2024-12-12 16:14:10.021438] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:43.887 [2024-12-12 16:14:10.021477] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:17:43.887 request: 00:17:43.887 { 00:17:43.887 "name": "raid_bdev1", 00:17:43.887 "raid_level": "raid1", 00:17:43.887 "base_bdevs": [ 00:17:43.887 "malloc1", 00:17:43.887 "malloc2" 00:17:43.887 ], 00:17:43.887 "superblock": false, 00:17:43.887 "method": "bdev_raid_create", 00:17:43.887 "req_id": 1 00:17:43.887 } 00:17:43.887 Got JSON-RPC error response 00:17:43.887 response: 00:17:43.887 { 00:17:43.887 "code": -17, 00:17:43.887 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:43.887 } 00:17:43.887 16:14:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:43.887 16:14:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # es=1 00:17:43.887 16:14:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:43.887 16:14:10 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:43.887 16:14:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:43.887 16:14:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.887 16:14:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.887 16:14:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:17:43.887 16:14:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:43.887 16:14:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.887 16:14:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:17:43.887 16:14:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:17:43.887 16:14:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:43.887 16:14:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.887 16:14:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:43.887 [2024-12-12 16:14:10.079196] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:43.887 [2024-12-12 16:14:10.079296] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:43.887 [2024-12-12 16:14:10.079332] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:43.887 [2024-12-12 16:14:10.079362] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:43.887 [2024-12-12 16:14:10.081477] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:43.887 [2024-12-12 16:14:10.081554] vbdev_passthru.c: 711:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: pt1 00:17:43.887 [2024-12-12 16:14:10.081647] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:43.887 [2024-12-12 16:14:10.081707] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:43.887 pt1 00:17:43.887 16:14:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.887 16:14:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:17:43.887 16:14:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:43.887 16:14:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:43.887 16:14:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:43.887 16:14:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:43.887 16:14:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:43.887 16:14:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:43.887 16:14:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:43.887 16:14:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:43.887 16:14:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:43.887 16:14:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.887 16:14:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:43.887 16:14:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.887 16:14:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set 
+x 00:17:43.887 16:14:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.887 16:14:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:43.887 "name": "raid_bdev1", 00:17:43.887 "uuid": "b93ec549-db9a-43b6-a757-ed18f81b2a61", 00:17:43.887 "strip_size_kb": 0, 00:17:43.887 "state": "configuring", 00:17:43.887 "raid_level": "raid1", 00:17:43.887 "superblock": true, 00:17:43.887 "num_base_bdevs": 2, 00:17:43.887 "num_base_bdevs_discovered": 1, 00:17:43.887 "num_base_bdevs_operational": 2, 00:17:43.887 "base_bdevs_list": [ 00:17:43.887 { 00:17:43.887 "name": "pt1", 00:17:43.887 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:43.887 "is_configured": true, 00:17:43.887 "data_offset": 256, 00:17:43.887 "data_size": 7936 00:17:43.887 }, 00:17:43.887 { 00:17:43.887 "name": null, 00:17:43.887 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:43.887 "is_configured": false, 00:17:43.887 "data_offset": 256, 00:17:43.887 "data_size": 7936 00:17:43.887 } 00:17:43.887 ] 00:17:43.887 }' 00:17:43.887 16:14:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:43.887 16:14:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:44.458 16:14:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:17:44.458 16:14:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:17:44.458 16:14:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:44.458 16:14:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:44.458 16:14:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.458 16:14:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:44.458 [2024-12-12 
16:14:10.538429] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:44.458 [2024-12-12 16:14:10.538507] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:44.458 [2024-12-12 16:14:10.538530] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:44.458 [2024-12-12 16:14:10.538541] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:44.458 [2024-12-12 16:14:10.539004] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:44.458 [2024-12-12 16:14:10.539025] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:44.458 [2024-12-12 16:14:10.539114] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:44.458 [2024-12-12 16:14:10.539140] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:44.458 [2024-12-12 16:14:10.539271] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:44.458 [2024-12-12 16:14:10.539283] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:44.458 [2024-12-12 16:14:10.539513] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:44.458 [2024-12-12 16:14:10.539690] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:44.458 [2024-12-12 16:14:10.539700] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:17:44.458 [2024-12-12 16:14:10.539852] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:44.458 pt2 00:17:44.458 16:14:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.458 16:14:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:44.458 16:14:10 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:44.458 16:14:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:44.458 16:14:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:44.458 16:14:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:44.458 16:14:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:44.458 16:14:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:44.458 16:14:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:44.458 16:14:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:44.458 16:14:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:44.458 16:14:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:44.458 16:14:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:44.458 16:14:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.458 16:14:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:44.458 16:14:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.458 16:14:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:44.458 16:14:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.458 16:14:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:44.458 "name": "raid_bdev1", 00:17:44.458 "uuid": "b93ec549-db9a-43b6-a757-ed18f81b2a61", 00:17:44.458 "strip_size_kb": 0, 00:17:44.458 
"state": "online", 00:17:44.458 "raid_level": "raid1", 00:17:44.458 "superblock": true, 00:17:44.458 "num_base_bdevs": 2, 00:17:44.458 "num_base_bdevs_discovered": 2, 00:17:44.458 "num_base_bdevs_operational": 2, 00:17:44.458 "base_bdevs_list": [ 00:17:44.458 { 00:17:44.458 "name": "pt1", 00:17:44.458 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:44.458 "is_configured": true, 00:17:44.458 "data_offset": 256, 00:17:44.458 "data_size": 7936 00:17:44.458 }, 00:17:44.458 { 00:17:44.458 "name": "pt2", 00:17:44.458 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:44.458 "is_configured": true, 00:17:44.458 "data_offset": 256, 00:17:44.458 "data_size": 7936 00:17:44.458 } 00:17:44.458 ] 00:17:44.458 }' 00:17:44.458 16:14:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:44.458 16:14:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:44.718 16:14:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:17:44.718 16:14:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:44.718 16:14:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:44.718 16:14:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:44.718 16:14:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:17:44.718 16:14:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:44.718 16:14:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:44.718 16:14:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:44.718 16:14:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.718 16:14:10 bdev_raid.raid_superblock_test_4k -- 
common/autotest_common.sh@10 -- # set +x 00:17:44.718 [2024-12-12 16:14:11.005841] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:44.718 16:14:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.718 16:14:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:44.718 "name": "raid_bdev1", 00:17:44.718 "aliases": [ 00:17:44.718 "b93ec549-db9a-43b6-a757-ed18f81b2a61" 00:17:44.718 ], 00:17:44.718 "product_name": "Raid Volume", 00:17:44.718 "block_size": 4096, 00:17:44.718 "num_blocks": 7936, 00:17:44.718 "uuid": "b93ec549-db9a-43b6-a757-ed18f81b2a61", 00:17:44.718 "assigned_rate_limits": { 00:17:44.718 "rw_ios_per_sec": 0, 00:17:44.718 "rw_mbytes_per_sec": 0, 00:17:44.718 "r_mbytes_per_sec": 0, 00:17:44.718 "w_mbytes_per_sec": 0 00:17:44.718 }, 00:17:44.718 "claimed": false, 00:17:44.718 "zoned": false, 00:17:44.718 "supported_io_types": { 00:17:44.718 "read": true, 00:17:44.718 "write": true, 00:17:44.718 "unmap": false, 00:17:44.718 "flush": false, 00:17:44.718 "reset": true, 00:17:44.718 "nvme_admin": false, 00:17:44.718 "nvme_io": false, 00:17:44.718 "nvme_io_md": false, 00:17:44.718 "write_zeroes": true, 00:17:44.718 "zcopy": false, 00:17:44.718 "get_zone_info": false, 00:17:44.718 "zone_management": false, 00:17:44.718 "zone_append": false, 00:17:44.718 "compare": false, 00:17:44.718 "compare_and_write": false, 00:17:44.718 "abort": false, 00:17:44.718 "seek_hole": false, 00:17:44.718 "seek_data": false, 00:17:44.718 "copy": false, 00:17:44.718 "nvme_iov_md": false 00:17:44.718 }, 00:17:44.718 "memory_domains": [ 00:17:44.718 { 00:17:44.718 "dma_device_id": "system", 00:17:44.718 "dma_device_type": 1 00:17:44.718 }, 00:17:44.718 { 00:17:44.718 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:44.718 "dma_device_type": 2 00:17:44.718 }, 00:17:44.718 { 00:17:44.718 "dma_device_id": "system", 00:17:44.718 "dma_device_type": 1 00:17:44.718 }, 
00:17:44.718 { 00:17:44.718 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:44.718 "dma_device_type": 2 00:17:44.718 } 00:17:44.718 ], 00:17:44.718 "driver_specific": { 00:17:44.718 "raid": { 00:17:44.718 "uuid": "b93ec549-db9a-43b6-a757-ed18f81b2a61", 00:17:44.718 "strip_size_kb": 0, 00:17:44.718 "state": "online", 00:17:44.718 "raid_level": "raid1", 00:17:44.718 "superblock": true, 00:17:44.718 "num_base_bdevs": 2, 00:17:44.718 "num_base_bdevs_discovered": 2, 00:17:44.718 "num_base_bdevs_operational": 2, 00:17:44.718 "base_bdevs_list": [ 00:17:44.718 { 00:17:44.719 "name": "pt1", 00:17:44.719 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:44.719 "is_configured": true, 00:17:44.719 "data_offset": 256, 00:17:44.719 "data_size": 7936 00:17:44.719 }, 00:17:44.719 { 00:17:44.719 "name": "pt2", 00:17:44.719 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:44.719 "is_configured": true, 00:17:44.719 "data_offset": 256, 00:17:44.719 "data_size": 7936 00:17:44.719 } 00:17:44.719 ] 00:17:44.719 } 00:17:44.719 } 00:17:44.719 }' 00:17:44.719 16:14:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:44.979 16:14:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:44.979 pt2' 00:17:44.979 16:14:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:44.979 16:14:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:17:44.979 16:14:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:44.979 16:14:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:44.979 16:14:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b 
pt1 00:17:44.979 16:14:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.979 16:14:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:44.979 16:14:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.979 16:14:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:44.979 16:14:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:44.979 16:14:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:44.979 16:14:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:44.979 16:14:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:44.979 16:14:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.979 16:14:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:44.979 16:14:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.979 16:14:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:44.979 16:14:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:44.979 16:14:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:17:44.979 16:14:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:44.979 16:14:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.979 16:14:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:44.979 [2024-12-12 16:14:11.245390] bdev_raid.c:1133:raid_bdev_dump_info_json: 
*DEBUG*: raid_bdev_dump_config_json 00:17:44.979 16:14:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.979 16:14:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' b93ec549-db9a-43b6-a757-ed18f81b2a61 '!=' b93ec549-db9a-43b6-a757-ed18f81b2a61 ']' 00:17:44.979 16:14:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:17:44.979 16:14:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:44.979 16:14:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:17:44.979 16:14:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:17:44.979 16:14:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.979 16:14:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:44.979 [2024-12-12 16:14:11.277162] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:44.979 16:14:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.979 16:14:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:44.979 16:14:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:44.979 16:14:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:44.979 16:14:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:44.979 16:14:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:44.979 16:14:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:44.979 16:14:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:44.979 16:14:11 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:44.979 16:14:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:44.979 16:14:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:44.979 16:14:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.979 16:14:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.979 16:14:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:44.979 16:14:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:44.979 16:14:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.239 16:14:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:45.239 "name": "raid_bdev1", 00:17:45.239 "uuid": "b93ec549-db9a-43b6-a757-ed18f81b2a61", 00:17:45.239 "strip_size_kb": 0, 00:17:45.240 "state": "online", 00:17:45.240 "raid_level": "raid1", 00:17:45.240 "superblock": true, 00:17:45.240 "num_base_bdevs": 2, 00:17:45.240 "num_base_bdevs_discovered": 1, 00:17:45.240 "num_base_bdevs_operational": 1, 00:17:45.240 "base_bdevs_list": [ 00:17:45.240 { 00:17:45.240 "name": null, 00:17:45.240 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:45.240 "is_configured": false, 00:17:45.240 "data_offset": 0, 00:17:45.240 "data_size": 7936 00:17:45.240 }, 00:17:45.240 { 00:17:45.240 "name": "pt2", 00:17:45.240 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:45.240 "is_configured": true, 00:17:45.240 "data_offset": 256, 00:17:45.240 "data_size": 7936 00:17:45.240 } 00:17:45.240 ] 00:17:45.240 }' 00:17:45.240 16:14:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:45.240 16:14:11 bdev_raid.raid_superblock_test_4k -- 
common/autotest_common.sh@10 -- # set +x 00:17:45.500 16:14:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:45.500 16:14:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.500 16:14:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:45.500 [2024-12-12 16:14:11.701026] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:45.500 [2024-12-12 16:14:11.701107] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:45.500 [2024-12-12 16:14:11.701197] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:45.500 [2024-12-12 16:14:11.701263] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:45.500 [2024-12-12 16:14:11.701323] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:17:45.500 16:14:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.500 16:14:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.500 16:14:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:17:45.500 16:14:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.500 16:14:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:45.500 16:14:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.500 16:14:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:17:45.500 16:14:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:17:45.500 16:14:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:17:45.500 
16:14:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:45.500 16:14:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:17:45.500 16:14:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.500 16:14:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:45.500 16:14:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.500 16:14:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:45.500 16:14:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:45.500 16:14:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:17:45.500 16:14:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:45.500 16:14:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 00:17:45.500 16:14:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:45.500 16:14:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.500 16:14:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:45.500 [2024-12-12 16:14:11.769023] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:45.500 [2024-12-12 16:14:11.769148] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:45.500 [2024-12-12 16:14:11.769185] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:17:45.500 [2024-12-12 16:14:11.769222] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:45.500 [2024-12-12 16:14:11.771717] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:17:45.500 [2024-12-12 16:14:11.771807] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:45.500 [2024-12-12 16:14:11.771937] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:45.500 [2024-12-12 16:14:11.772015] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:45.500 [2024-12-12 16:14:11.772172] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:45.500 [2024-12-12 16:14:11.772220] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:45.500 [2024-12-12 16:14:11.772486] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:45.500 [2024-12-12 16:14:11.772700] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:45.500 [2024-12-12 16:14:11.772746] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:17:45.500 [2024-12-12 16:14:11.772991] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:45.500 pt2 00:17:45.500 16:14:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.500 16:14:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:45.500 16:14:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:45.500 16:14:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:45.500 16:14:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:45.500 16:14:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:45.500 16:14:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:17:45.500 16:14:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:45.500 16:14:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:45.500 16:14:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:45.500 16:14:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:45.500 16:14:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.500 16:14:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.500 16:14:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:45.500 16:14:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:45.500 16:14:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.500 16:14:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:45.500 "name": "raid_bdev1", 00:17:45.500 "uuid": "b93ec549-db9a-43b6-a757-ed18f81b2a61", 00:17:45.500 "strip_size_kb": 0, 00:17:45.500 "state": "online", 00:17:45.500 "raid_level": "raid1", 00:17:45.500 "superblock": true, 00:17:45.500 "num_base_bdevs": 2, 00:17:45.500 "num_base_bdevs_discovered": 1, 00:17:45.500 "num_base_bdevs_operational": 1, 00:17:45.500 "base_bdevs_list": [ 00:17:45.500 { 00:17:45.500 "name": null, 00:17:45.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:45.500 "is_configured": false, 00:17:45.500 "data_offset": 256, 00:17:45.500 "data_size": 7936 00:17:45.500 }, 00:17:45.500 { 00:17:45.501 "name": "pt2", 00:17:45.501 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:45.501 "is_configured": true, 00:17:45.501 "data_offset": 256, 00:17:45.501 "data_size": 7936 00:17:45.501 } 00:17:45.501 ] 00:17:45.501 }' 00:17:45.501 
16:14:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:45.501 16:14:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:46.071 16:14:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:46.071 16:14:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.071 16:14:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:46.071 [2024-12-12 16:14:12.220355] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:46.071 [2024-12-12 16:14:12.220385] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:46.071 [2024-12-12 16:14:12.220433] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:46.071 [2024-12-12 16:14:12.220473] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:46.071 [2024-12-12 16:14:12.220482] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:17:46.071 16:14:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.071 16:14:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.071 16:14:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.071 16:14:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:46.071 16:14:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:17:46.071 16:14:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.071 16:14:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:17:46.071 16:14:12 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:17:46.071 16:14:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:17:46.071 16:14:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:46.071 16:14:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.071 16:14:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:46.071 [2024-12-12 16:14:12.284282] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:46.071 [2024-12-12 16:14:12.284334] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:46.071 [2024-12-12 16:14:12.284360] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:17:46.071 [2024-12-12 16:14:12.284372] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:46.071 [2024-12-12 16:14:12.286664] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:46.071 [2024-12-12 16:14:12.286704] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:46.071 [2024-12-12 16:14:12.286771] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:46.071 [2024-12-12 16:14:12.286820] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:46.071 [2024-12-12 16:14:12.286970] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:46.071 [2024-12-12 16:14:12.286984] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:46.071 [2024-12-12 16:14:12.287000] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:17:46.071 [2024-12-12 16:14:12.287060] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:46.071 [2024-12-12 16:14:12.287150] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:17:46.071 [2024-12-12 16:14:12.287159] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:46.071 [2024-12-12 16:14:12.287391] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:46.071 [2024-12-12 16:14:12.287552] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:17:46.071 [2024-12-12 16:14:12.287566] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:17:46.071 [2024-12-12 16:14:12.287722] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:46.071 pt1 00:17:46.071 16:14:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.071 16:14:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:17:46.071 16:14:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:46.071 16:14:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:46.071 16:14:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:46.071 16:14:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:46.071 16:14:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:46.071 16:14:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:46.071 16:14:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:46.071 16:14:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:17:46.071 16:14:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:46.071 16:14:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:46.071 16:14:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.071 16:14:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.071 16:14:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:46.071 16:14:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:46.071 16:14:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.071 16:14:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:46.071 "name": "raid_bdev1", 00:17:46.071 "uuid": "b93ec549-db9a-43b6-a757-ed18f81b2a61", 00:17:46.071 "strip_size_kb": 0, 00:17:46.071 "state": "online", 00:17:46.071 "raid_level": "raid1", 00:17:46.071 "superblock": true, 00:17:46.071 "num_base_bdevs": 2, 00:17:46.071 "num_base_bdevs_discovered": 1, 00:17:46.071 "num_base_bdevs_operational": 1, 00:17:46.071 "base_bdevs_list": [ 00:17:46.071 { 00:17:46.071 "name": null, 00:17:46.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:46.071 "is_configured": false, 00:17:46.071 "data_offset": 256, 00:17:46.071 "data_size": 7936 00:17:46.071 }, 00:17:46.071 { 00:17:46.071 "name": "pt2", 00:17:46.071 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:46.071 "is_configured": true, 00:17:46.071 "data_offset": 256, 00:17:46.071 "data_size": 7936 00:17:46.071 } 00:17:46.071 ] 00:17:46.071 }' 00:17:46.071 16:14:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:46.071 16:14:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:46.646 16:14:12 bdev_raid.raid_superblock_test_4k 
-- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:17:46.646 16:14:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:46.646 16:14:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.646 16:14:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:46.646 16:14:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.646 16:14:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:17:46.646 16:14:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:17:46.646 16:14:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:46.646 16:14:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.646 16:14:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:46.646 [2024-12-12 16:14:12.780180] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:46.646 16:14:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.646 16:14:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' b93ec549-db9a-43b6-a757-ed18f81b2a61 '!=' b93ec549-db9a-43b6-a757-ed18f81b2a61 ']' 00:17:46.646 16:14:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 88274 00:17:46.646 16:14:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # '[' -z 88274 ']' 00:17:46.646 16:14:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # kill -0 88274 00:17:46.646 16:14:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # uname 00:17:46.646 16:14:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux 
']' 00:17:46.646 16:14:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88274 00:17:46.646 16:14:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:46.646 16:14:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:46.646 16:14:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88274' 00:17:46.646 killing process with pid 88274 00:17:46.646 16:14:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@973 -- # kill 88274 00:17:46.646 [2024-12-12 16:14:12.848040] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:46.646 [2024-12-12 16:14:12.848107] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:46.646 [2024-12-12 16:14:12.848139] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:46.646 [2024-12-12 16:14:12.848154] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:17:46.646 16:14:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@978 -- # wait 88274 00:17:46.906 [2024-12-12 16:14:13.060586] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:48.286 16:14:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:17:48.287 00:17:48.287 real 0m6.070s 00:17:48.287 user 0m9.216s 00:17:48.287 sys 0m1.009s 00:17:48.287 16:14:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:48.287 ************************************ 00:17:48.287 END TEST raid_superblock_test_4k 00:17:48.287 ************************************ 00:17:48.287 16:14:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:48.287 16:14:14 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = 
true ']' 00:17:48.287 16:14:14 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:17:48.287 16:14:14 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:48.287 16:14:14 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:48.287 16:14:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:48.287 ************************************ 00:17:48.287 START TEST raid_rebuild_test_sb_4k 00:17:48.287 ************************************ 00:17:48.287 16:14:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:17:48.287 16:14:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:17:48.287 16:14:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:17:48.287 16:14:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:48.287 16:14:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:48.287 16:14:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:48.287 16:14:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:48.287 16:14:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:48.287 16:14:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:48.287 16:14:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:48.287 16:14:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:48.287 16:14:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:48.287 16:14:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:48.287 16:14:14 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:48.287 16:14:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:48.287 16:14:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:48.287 16:14:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:48.287 16:14:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:48.287 16:14:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:48.287 16:14:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:48.287 16:14:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:48.287 16:14:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:17:48.287 16:14:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:17:48.287 16:14:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:48.287 16:14:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:48.287 16:14:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=88597 00:17:48.287 16:14:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 88597 00:17:48.287 16:14:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:48.287 16:14:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 88597 ']' 00:17:48.287 16:14:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:48.287 16:14:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:17:48.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:48.287 16:14:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:48.287 16:14:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:48.287 16:14:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:48.287 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:48.287 Zero copy mechanism will not be used. 00:17:48.287 [2024-12-12 16:14:14.388214] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:17:48.287 [2024-12-12 16:14:14.388337] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88597 ] 00:17:48.287 [2024-12-12 16:14:14.575909] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:48.547 [2024-12-12 16:14:14.708380] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:48.807 [2024-12-12 16:14:14.942263] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:48.807 [2024-12-12 16:14:14.942333] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:49.067 16:14:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:49.067 16:14:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:17:49.067 16:14:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:49.067 16:14:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:17:49.067 
16:14:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.067 16:14:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:49.067 BaseBdev1_malloc 00:17:49.067 16:14:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.067 16:14:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:49.067 16:14:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.067 16:14:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:49.067 [2024-12-12 16:14:15.238809] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:49.067 [2024-12-12 16:14:15.238889] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:49.067 [2024-12-12 16:14:15.238929] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:49.067 [2024-12-12 16:14:15.238943] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:49.067 [2024-12-12 16:14:15.241291] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:49.067 [2024-12-12 16:14:15.241333] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:49.067 BaseBdev1 00:17:49.067 16:14:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.067 16:14:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:49.067 16:14:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:17:49.067 16:14:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.067 16:14:15 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:17:49.067 BaseBdev2_malloc 00:17:49.067 16:14:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.067 16:14:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:49.067 16:14:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.067 16:14:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:49.067 [2024-12-12 16:14:15.298777] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:49.067 [2024-12-12 16:14:15.298840] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:49.067 [2024-12-12 16:14:15.298864] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:49.067 [2024-12-12 16:14:15.298879] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:49.067 [2024-12-12 16:14:15.301144] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:49.067 [2024-12-12 16:14:15.301180] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:49.067 BaseBdev2 00:17:49.067 16:14:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.067 16:14:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:17:49.067 16:14:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.067 16:14:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:49.067 spare_malloc 00:17:49.067 16:14:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.067 16:14:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b 
spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:49.067 16:14:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.067 16:14:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:49.067 spare_delay 00:17:49.067 16:14:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.067 16:14:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:49.067 16:14:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.067 16:14:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:49.067 [2024-12-12 16:14:15.404537] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:49.067 [2024-12-12 16:14:15.404599] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:49.067 [2024-12-12 16:14:15.404621] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:49.067 [2024-12-12 16:14:15.404635] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:49.067 [2024-12-12 16:14:15.406945] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:49.067 [2024-12-12 16:14:15.406982] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:49.067 spare 00:17:49.067 16:14:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.067 16:14:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:17:49.067 16:14:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.067 16:14:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:49.067 
[2024-12-12 16:14:15.416584] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:49.325 [2024-12-12 16:14:15.418562] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:49.325 [2024-12-12 16:14:15.418763] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:49.325 [2024-12-12 16:14:15.418788] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:49.325 [2024-12-12 16:14:15.419037] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:49.325 [2024-12-12 16:14:15.419252] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:49.325 [2024-12-12 16:14:15.419270] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:49.325 [2024-12-12 16:14:15.419422] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:49.325 16:14:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.325 16:14:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:49.325 16:14:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:49.325 16:14:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:49.325 16:14:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:49.325 16:14:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:49.325 16:14:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:49.325 16:14:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:49.325 16:14:15 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:49.325 16:14:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:49.325 16:14:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:49.325 16:14:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.325 16:14:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.325 16:14:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:49.325 16:14:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:49.325 16:14:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.325 16:14:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:49.325 "name": "raid_bdev1", 00:17:49.325 "uuid": "b9f78a46-8883-409f-8823-ae952ef832d6", 00:17:49.325 "strip_size_kb": 0, 00:17:49.325 "state": "online", 00:17:49.325 "raid_level": "raid1", 00:17:49.325 "superblock": true, 00:17:49.325 "num_base_bdevs": 2, 00:17:49.325 "num_base_bdevs_discovered": 2, 00:17:49.325 "num_base_bdevs_operational": 2, 00:17:49.325 "base_bdevs_list": [ 00:17:49.325 { 00:17:49.325 "name": "BaseBdev1", 00:17:49.325 "uuid": "68cc066c-35a8-5736-9618-44137fa878c5", 00:17:49.325 "is_configured": true, 00:17:49.325 "data_offset": 256, 00:17:49.326 "data_size": 7936 00:17:49.326 }, 00:17:49.326 { 00:17:49.326 "name": "BaseBdev2", 00:17:49.326 "uuid": "b7d23533-67f0-522b-9577-3d4f449e850c", 00:17:49.326 "is_configured": true, 00:17:49.326 "data_offset": 256, 00:17:49.326 "data_size": 7936 00:17:49.326 } 00:17:49.326 ] 00:17:49.326 }' 00:17:49.326 16:14:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:49.326 16:14:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set 
+x 00:17:49.585 16:14:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:49.585 16:14:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.585 16:14:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:49.585 16:14:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:49.585 [2024-12-12 16:14:15.868263] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:49.585 16:14:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.585 16:14:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:17:49.585 16:14:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.585 16:14:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.585 16:14:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:49.585 16:14:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:49.585 16:14:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.844 16:14:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:17:49.844 16:14:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:49.844 16:14:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:49.844 16:14:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:49.844 16:14:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:49.844 16:14:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local 
rpc_server=/var/tmp/spdk.sock 00:17:49.844 16:14:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:49.844 16:14:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:49.844 16:14:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:49.844 16:14:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:49.844 16:14:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:17:49.844 16:14:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:49.844 16:14:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:49.844 16:14:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:49.844 [2024-12-12 16:14:16.135969] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:49.844 /dev/nbd0 00:17:49.844 16:14:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:49.844 16:14:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:49.844 16:14:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:49.844 16:14:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:17:49.844 16:14:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:49.844 16:14:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:49.844 16:14:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:49.844 16:14:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:17:49.844 16:14:16 bdev_raid.raid_rebuild_test_sb_4k 
-- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:49.844 16:14:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:49.844 16:14:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:50.104 1+0 records in 00:17:50.104 1+0 records out 00:17:50.104 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000416601 s, 9.8 MB/s 00:17:50.104 16:14:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:50.104 16:14:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:17:50.104 16:14:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:50.104 16:14:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:50.104 16:14:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:17:50.104 16:14:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:50.104 16:14:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:50.104 16:14:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:17:50.104 16:14:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:17:50.104 16:14:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:17:50.673 7936+0 records in 00:17:50.673 7936+0 records out 00:17:50.673 32505856 bytes (33 MB, 31 MiB) copied, 0.590442 s, 55.1 MB/s 00:17:50.673 16:14:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:50.673 16:14:16 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:50.673 16:14:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:50.673 16:14:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:50.673 16:14:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:17:50.673 16:14:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:50.673 16:14:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:50.933 16:14:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:50.933 [2024-12-12 16:14:17.039906] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:50.933 16:14:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:50.933 16:14:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:50.933 16:14:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:50.933 16:14:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:50.933 16:14:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:50.933 16:14:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:17:50.933 16:14:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:17:50.933 16:14:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:50.933 16:14:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.933 16:14:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:50.933 [2024-12-12 16:14:17.057247] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:50.933 16:14:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.933 16:14:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:50.933 16:14:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:50.933 16:14:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:50.933 16:14:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:50.933 16:14:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:50.933 16:14:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:50.933 16:14:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:50.933 16:14:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:50.933 16:14:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:50.933 16:14:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:50.933 16:14:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.933 16:14:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.933 16:14:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:50.933 16:14:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:50.933 16:14:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.933 16:14:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:50.933 "name": 
"raid_bdev1", 00:17:50.933 "uuid": "b9f78a46-8883-409f-8823-ae952ef832d6", 00:17:50.933 "strip_size_kb": 0, 00:17:50.933 "state": "online", 00:17:50.933 "raid_level": "raid1", 00:17:50.933 "superblock": true, 00:17:50.933 "num_base_bdevs": 2, 00:17:50.933 "num_base_bdevs_discovered": 1, 00:17:50.933 "num_base_bdevs_operational": 1, 00:17:50.933 "base_bdevs_list": [ 00:17:50.933 { 00:17:50.933 "name": null, 00:17:50.933 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:50.933 "is_configured": false, 00:17:50.933 "data_offset": 0, 00:17:50.933 "data_size": 7936 00:17:50.933 }, 00:17:50.933 { 00:17:50.933 "name": "BaseBdev2", 00:17:50.933 "uuid": "b7d23533-67f0-522b-9577-3d4f449e850c", 00:17:50.933 "is_configured": true, 00:17:50.933 "data_offset": 256, 00:17:50.933 "data_size": 7936 00:17:50.933 } 00:17:50.933 ] 00:17:50.933 }' 00:17:50.933 16:14:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:50.933 16:14:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:51.192 16:14:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:51.192 16:14:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.192 16:14:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:51.192 [2024-12-12 16:14:17.476514] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:51.192 [2024-12-12 16:14:17.491869] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:17:51.192 16:14:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.192 16:14:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:51.192 [2024-12-12 16:14:17.493704] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:52.573 16:14:18 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:52.573 16:14:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:52.574 16:14:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:52.574 16:14:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:52.574 16:14:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:52.574 16:14:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.574 16:14:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.574 16:14:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:52.574 16:14:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:52.574 16:14:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.574 16:14:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:52.574 "name": "raid_bdev1", 00:17:52.574 "uuid": "b9f78a46-8883-409f-8823-ae952ef832d6", 00:17:52.574 "strip_size_kb": 0, 00:17:52.574 "state": "online", 00:17:52.574 "raid_level": "raid1", 00:17:52.574 "superblock": true, 00:17:52.574 "num_base_bdevs": 2, 00:17:52.574 "num_base_bdevs_discovered": 2, 00:17:52.574 "num_base_bdevs_operational": 2, 00:17:52.574 "process": { 00:17:52.574 "type": "rebuild", 00:17:52.574 "target": "spare", 00:17:52.574 "progress": { 00:17:52.574 "blocks": 2560, 00:17:52.574 "percent": 32 00:17:52.574 } 00:17:52.574 }, 00:17:52.574 "base_bdevs_list": [ 00:17:52.574 { 00:17:52.574 "name": "spare", 00:17:52.574 "uuid": "d7fe1cc3-ba47-5653-8e48-8c03526de28e", 00:17:52.574 "is_configured": true, 00:17:52.574 "data_offset": 256, 
00:17:52.574 "data_size": 7936 00:17:52.574 }, 00:17:52.574 { 00:17:52.574 "name": "BaseBdev2", 00:17:52.574 "uuid": "b7d23533-67f0-522b-9577-3d4f449e850c", 00:17:52.574 "is_configured": true, 00:17:52.574 "data_offset": 256, 00:17:52.574 "data_size": 7936 00:17:52.574 } 00:17:52.574 ] 00:17:52.574 }' 00:17:52.574 16:14:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:52.574 16:14:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:52.574 16:14:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:52.574 16:14:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:52.574 16:14:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:52.574 16:14:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.574 16:14:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:52.574 [2024-12-12 16:14:18.649045] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:52.574 [2024-12-12 16:14:18.698517] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:52.574 [2024-12-12 16:14:18.698575] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:52.574 [2024-12-12 16:14:18.698590] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:52.574 [2024-12-12 16:14:18.698599] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:52.574 16:14:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.574 16:14:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:52.574 
16:14:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:52.574 16:14:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:52.574 16:14:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:52.574 16:14:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:52.574 16:14:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:52.574 16:14:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:52.574 16:14:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:52.574 16:14:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:52.574 16:14:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:52.574 16:14:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.574 16:14:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:52.574 16:14:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.574 16:14:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:52.574 16:14:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.574 16:14:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:52.574 "name": "raid_bdev1", 00:17:52.574 "uuid": "b9f78a46-8883-409f-8823-ae952ef832d6", 00:17:52.574 "strip_size_kb": 0, 00:17:52.574 "state": "online", 00:17:52.574 "raid_level": "raid1", 00:17:52.574 "superblock": true, 00:17:52.574 "num_base_bdevs": 2, 00:17:52.574 "num_base_bdevs_discovered": 1, 00:17:52.574 
"num_base_bdevs_operational": 1, 00:17:52.574 "base_bdevs_list": [ 00:17:52.574 { 00:17:52.574 "name": null, 00:17:52.574 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:52.574 "is_configured": false, 00:17:52.574 "data_offset": 0, 00:17:52.574 "data_size": 7936 00:17:52.574 }, 00:17:52.574 { 00:17:52.574 "name": "BaseBdev2", 00:17:52.574 "uuid": "b7d23533-67f0-522b-9577-3d4f449e850c", 00:17:52.574 "is_configured": true, 00:17:52.574 "data_offset": 256, 00:17:52.574 "data_size": 7936 00:17:52.574 } 00:17:52.574 ] 00:17:52.574 }' 00:17:52.574 16:14:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:52.574 16:14:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:53.143 16:14:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:53.144 16:14:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:53.144 16:14:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:53.144 16:14:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:53.144 16:14:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:53.144 16:14:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:53.144 16:14:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:53.144 16:14:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.144 16:14:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:53.144 16:14:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.144 16:14:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:53.144 
"name": "raid_bdev1", 00:17:53.144 "uuid": "b9f78a46-8883-409f-8823-ae952ef832d6", 00:17:53.144 "strip_size_kb": 0, 00:17:53.144 "state": "online", 00:17:53.144 "raid_level": "raid1", 00:17:53.144 "superblock": true, 00:17:53.144 "num_base_bdevs": 2, 00:17:53.144 "num_base_bdevs_discovered": 1, 00:17:53.144 "num_base_bdevs_operational": 1, 00:17:53.144 "base_bdevs_list": [ 00:17:53.144 { 00:17:53.144 "name": null, 00:17:53.144 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:53.144 "is_configured": false, 00:17:53.144 "data_offset": 0, 00:17:53.144 "data_size": 7936 00:17:53.144 }, 00:17:53.144 { 00:17:53.144 "name": "BaseBdev2", 00:17:53.144 "uuid": "b7d23533-67f0-522b-9577-3d4f449e850c", 00:17:53.144 "is_configured": true, 00:17:53.144 "data_offset": 256, 00:17:53.144 "data_size": 7936 00:17:53.144 } 00:17:53.144 ] 00:17:53.144 }' 00:17:53.144 16:14:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:53.144 16:14:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:53.144 16:14:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:53.144 16:14:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:53.144 16:14:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:53.144 16:14:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.144 16:14:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:53.144 [2024-12-12 16:14:19.325440] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:53.144 [2024-12-12 16:14:19.341006] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:17:53.144 16:14:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:17:53.144 16:14:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:53.144 [2024-12-12 16:14:19.342834] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:54.083 16:14:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:54.083 16:14:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:54.083 16:14:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:54.083 16:14:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:54.083 16:14:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:54.083 16:14:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:54.083 16:14:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.083 16:14:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:54.083 16:14:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:54.083 16:14:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.083 16:14:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:54.083 "name": "raid_bdev1", 00:17:54.083 "uuid": "b9f78a46-8883-409f-8823-ae952ef832d6", 00:17:54.083 "strip_size_kb": 0, 00:17:54.083 "state": "online", 00:17:54.083 "raid_level": "raid1", 00:17:54.083 "superblock": true, 00:17:54.083 "num_base_bdevs": 2, 00:17:54.083 "num_base_bdevs_discovered": 2, 00:17:54.083 "num_base_bdevs_operational": 2, 00:17:54.083 "process": { 00:17:54.083 "type": "rebuild", 00:17:54.083 "target": "spare", 00:17:54.083 "progress": { 00:17:54.083 "blocks": 2560, 00:17:54.083 
"percent": 32 00:17:54.083 } 00:17:54.083 }, 00:17:54.083 "base_bdevs_list": [ 00:17:54.083 { 00:17:54.083 "name": "spare", 00:17:54.083 "uuid": "d7fe1cc3-ba47-5653-8e48-8c03526de28e", 00:17:54.083 "is_configured": true, 00:17:54.083 "data_offset": 256, 00:17:54.083 "data_size": 7936 00:17:54.083 }, 00:17:54.083 { 00:17:54.083 "name": "BaseBdev2", 00:17:54.083 "uuid": "b7d23533-67f0-522b-9577-3d4f449e850c", 00:17:54.083 "is_configured": true, 00:17:54.083 "data_offset": 256, 00:17:54.083 "data_size": 7936 00:17:54.083 } 00:17:54.083 ] 00:17:54.083 }' 00:17:54.083 16:14:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:54.343 16:14:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:54.343 16:14:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:54.343 16:14:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:54.343 16:14:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:54.343 16:14:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:54.343 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:54.343 16:14:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:17:54.343 16:14:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:17:54.343 16:14:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:17:54.343 16:14:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=688 00:17:54.343 16:14:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:54.343 16:14:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:17:54.343 16:14:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:54.343 16:14:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:54.343 16:14:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:54.343 16:14:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:54.343 16:14:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:54.343 16:14:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.343 16:14:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:54.343 16:14:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:54.343 16:14:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.343 16:14:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:54.343 "name": "raid_bdev1", 00:17:54.343 "uuid": "b9f78a46-8883-409f-8823-ae952ef832d6", 00:17:54.343 "strip_size_kb": 0, 00:17:54.343 "state": "online", 00:17:54.343 "raid_level": "raid1", 00:17:54.343 "superblock": true, 00:17:54.343 "num_base_bdevs": 2, 00:17:54.343 "num_base_bdevs_discovered": 2, 00:17:54.343 "num_base_bdevs_operational": 2, 00:17:54.343 "process": { 00:17:54.343 "type": "rebuild", 00:17:54.343 "target": "spare", 00:17:54.343 "progress": { 00:17:54.343 "blocks": 2816, 00:17:54.343 "percent": 35 00:17:54.343 } 00:17:54.343 }, 00:17:54.343 "base_bdevs_list": [ 00:17:54.343 { 00:17:54.343 "name": "spare", 00:17:54.343 "uuid": "d7fe1cc3-ba47-5653-8e48-8c03526de28e", 00:17:54.343 "is_configured": true, 00:17:54.343 "data_offset": 256, 00:17:54.343 "data_size": 7936 00:17:54.343 }, 00:17:54.343 { 00:17:54.343 "name": "BaseBdev2", 
00:17:54.343 "uuid": "b7d23533-67f0-522b-9577-3d4f449e850c", 00:17:54.343 "is_configured": true, 00:17:54.343 "data_offset": 256, 00:17:54.343 "data_size": 7936 00:17:54.343 } 00:17:54.343 ] 00:17:54.343 }' 00:17:54.343 16:14:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:54.343 16:14:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:54.343 16:14:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:54.343 16:14:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:54.343 16:14:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:55.282 16:14:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:55.282 16:14:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:55.282 16:14:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:55.282 16:14:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:55.282 16:14:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:55.282 16:14:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:55.282 16:14:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:55.282 16:14:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.282 16:14:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:55.282 16:14:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:55.541 16:14:21 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.541 16:14:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:55.541 "name": "raid_bdev1", 00:17:55.541 "uuid": "b9f78a46-8883-409f-8823-ae952ef832d6", 00:17:55.541 "strip_size_kb": 0, 00:17:55.541 "state": "online", 00:17:55.541 "raid_level": "raid1", 00:17:55.541 "superblock": true, 00:17:55.541 "num_base_bdevs": 2, 00:17:55.541 "num_base_bdevs_discovered": 2, 00:17:55.541 "num_base_bdevs_operational": 2, 00:17:55.541 "process": { 00:17:55.541 "type": "rebuild", 00:17:55.541 "target": "spare", 00:17:55.541 "progress": { 00:17:55.541 "blocks": 5632, 00:17:55.541 "percent": 70 00:17:55.541 } 00:17:55.541 }, 00:17:55.541 "base_bdevs_list": [ 00:17:55.541 { 00:17:55.541 "name": "spare", 00:17:55.541 "uuid": "d7fe1cc3-ba47-5653-8e48-8c03526de28e", 00:17:55.541 "is_configured": true, 00:17:55.541 "data_offset": 256, 00:17:55.541 "data_size": 7936 00:17:55.541 }, 00:17:55.541 { 00:17:55.541 "name": "BaseBdev2", 00:17:55.541 "uuid": "b7d23533-67f0-522b-9577-3d4f449e850c", 00:17:55.541 "is_configured": true, 00:17:55.541 "data_offset": 256, 00:17:55.541 "data_size": 7936 00:17:55.541 } 00:17:55.541 ] 00:17:55.541 }' 00:17:55.541 16:14:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:55.541 16:14:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:55.541 16:14:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:55.541 16:14:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:55.541 16:14:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:56.110 [2024-12-12 16:14:22.454936] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:56.110 [2024-12-12 16:14:22.455002] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:56.110 [2024-12-12 16:14:22.455089] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:56.680 16:14:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:56.680 16:14:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:56.680 16:14:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:56.680 16:14:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:56.680 16:14:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:56.680 16:14:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:56.680 16:14:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.680 16:14:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.680 16:14:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:56.680 16:14:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:56.680 16:14:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.680 16:14:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:56.680 "name": "raid_bdev1", 00:17:56.680 "uuid": "b9f78a46-8883-409f-8823-ae952ef832d6", 00:17:56.680 "strip_size_kb": 0, 00:17:56.680 "state": "online", 00:17:56.680 "raid_level": "raid1", 00:17:56.680 "superblock": true, 00:17:56.680 "num_base_bdevs": 2, 00:17:56.680 "num_base_bdevs_discovered": 2, 00:17:56.680 "num_base_bdevs_operational": 2, 00:17:56.680 "base_bdevs_list": [ 00:17:56.680 { 00:17:56.680 "name": 
"spare", 00:17:56.680 "uuid": "d7fe1cc3-ba47-5653-8e48-8c03526de28e", 00:17:56.680 "is_configured": true, 00:17:56.680 "data_offset": 256, 00:17:56.680 "data_size": 7936 00:17:56.680 }, 00:17:56.680 { 00:17:56.680 "name": "BaseBdev2", 00:17:56.680 "uuid": "b7d23533-67f0-522b-9577-3d4f449e850c", 00:17:56.680 "is_configured": true, 00:17:56.680 "data_offset": 256, 00:17:56.680 "data_size": 7936 00:17:56.680 } 00:17:56.680 ] 00:17:56.680 }' 00:17:56.680 16:14:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:56.680 16:14:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:56.680 16:14:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:56.680 16:14:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:56.680 16:14:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 00:17:56.680 16:14:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:56.680 16:14:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:56.680 16:14:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:56.680 16:14:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:56.680 16:14:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:56.680 16:14:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.680 16:14:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:56.680 16:14:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.680 16:14:22 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:17:56.680 16:14:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.680 16:14:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:56.680 "name": "raid_bdev1", 00:17:56.680 "uuid": "b9f78a46-8883-409f-8823-ae952ef832d6", 00:17:56.680 "strip_size_kb": 0, 00:17:56.680 "state": "online", 00:17:56.680 "raid_level": "raid1", 00:17:56.680 "superblock": true, 00:17:56.680 "num_base_bdevs": 2, 00:17:56.680 "num_base_bdevs_discovered": 2, 00:17:56.680 "num_base_bdevs_operational": 2, 00:17:56.680 "base_bdevs_list": [ 00:17:56.680 { 00:17:56.680 "name": "spare", 00:17:56.680 "uuid": "d7fe1cc3-ba47-5653-8e48-8c03526de28e", 00:17:56.680 "is_configured": true, 00:17:56.680 "data_offset": 256, 00:17:56.680 "data_size": 7936 00:17:56.680 }, 00:17:56.680 { 00:17:56.680 "name": "BaseBdev2", 00:17:56.680 "uuid": "b7d23533-67f0-522b-9577-3d4f449e850c", 00:17:56.680 "is_configured": true, 00:17:56.680 "data_offset": 256, 00:17:56.680 "data_size": 7936 00:17:56.680 } 00:17:56.680 ] 00:17:56.680 }' 00:17:56.680 16:14:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:56.680 16:14:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:56.680 16:14:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:56.940 16:14:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:56.940 16:14:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:56.940 16:14:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:56.940 16:14:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:56.940 16:14:23 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:56.940 16:14:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:56.940 16:14:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:56.940 16:14:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:56.940 16:14:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:56.940 16:14:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:56.940 16:14:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:56.940 16:14:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.940 16:14:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:56.940 16:14:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.940 16:14:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:56.940 16:14:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.940 16:14:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:56.940 "name": "raid_bdev1", 00:17:56.940 "uuid": "b9f78a46-8883-409f-8823-ae952ef832d6", 00:17:56.940 "strip_size_kb": 0, 00:17:56.940 "state": "online", 00:17:56.940 "raid_level": "raid1", 00:17:56.940 "superblock": true, 00:17:56.940 "num_base_bdevs": 2, 00:17:56.940 "num_base_bdevs_discovered": 2, 00:17:56.940 "num_base_bdevs_operational": 2, 00:17:56.940 "base_bdevs_list": [ 00:17:56.940 { 00:17:56.940 "name": "spare", 00:17:56.940 "uuid": "d7fe1cc3-ba47-5653-8e48-8c03526de28e", 00:17:56.940 "is_configured": true, 00:17:56.940 "data_offset": 256, 00:17:56.940 "data_size": 7936 00:17:56.940 }, 00:17:56.940 
{ 00:17:56.940 "name": "BaseBdev2", 00:17:56.940 "uuid": "b7d23533-67f0-522b-9577-3d4f449e850c", 00:17:56.940 "is_configured": true, 00:17:56.940 "data_offset": 256, 00:17:56.940 "data_size": 7936 00:17:56.940 } 00:17:56.940 ] 00:17:56.940 }' 00:17:56.940 16:14:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:56.940 16:14:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:57.199 16:14:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:57.199 16:14:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.199 16:14:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:57.199 [2024-12-12 16:14:23.487337] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:57.199 [2024-12-12 16:14:23.487369] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:57.199 [2024-12-12 16:14:23.487445] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:57.199 [2024-12-12 16:14:23.487512] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:57.199 [2024-12-12 16:14:23.487525] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:57.199 16:14:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.199 16:14:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.199 16:14:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.199 16:14:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:57.199 16:14:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:17:57.199 
16:14:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.199 16:14:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:57.199 16:14:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:57.199 16:14:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:57.199 16:14:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:57.199 16:14:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:57.199 16:14:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:57.199 16:14:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:57.199 16:14:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:57.199 16:14:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:57.199 16:14:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:17:57.199 16:14:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:57.199 16:14:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:57.199 16:14:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:57.459 /dev/nbd0 00:17:57.459 16:14:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:57.459 16:14:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:57.459 16:14:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:57.459 16:14:23 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:17:57.459 16:14:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:57.459 16:14:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:57.459 16:14:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:57.459 16:14:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:17:57.459 16:14:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:57.459 16:14:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:57.459 16:14:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:57.459 1+0 records in 00:17:57.459 1+0 records out 00:17:57.459 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000345224 s, 11.9 MB/s 00:17:57.459 16:14:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:57.459 16:14:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:17:57.459 16:14:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:57.459 16:14:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:57.459 16:14:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:17:57.459 16:14:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:57.459 16:14:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:57.459 16:14:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:57.719 /dev/nbd1 00:17:57.719 16:14:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:57.719 16:14:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:57.719 16:14:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:57.719 16:14:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:17:57.719 16:14:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:57.719 16:14:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:57.719 16:14:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:57.719 16:14:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:17:57.719 16:14:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:57.719 16:14:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:57.719 16:14:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:57.719 1+0 records in 00:17:57.719 1+0 records out 00:17:57.719 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00038751 s, 10.6 MB/s 00:17:57.719 16:14:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:57.719 16:14:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:17:57.719 16:14:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:57.719 16:14:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
00:17:57.719 16:14:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:17:57.719 16:14:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:57.719 16:14:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:57.719 16:14:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:57.978 16:14:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:57.978 16:14:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:57.978 16:14:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:57.978 16:14:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:57.978 16:14:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:17:57.978 16:14:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:57.978 16:14:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:58.237 16:14:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:58.237 16:14:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:58.237 16:14:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:58.237 16:14:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:58.237 16:14:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:58.237 16:14:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:58.237 16:14:24 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/nbd_common.sh@41 -- # break 00:17:58.237 16:14:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:17:58.237 16:14:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:58.237 16:14:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:58.498 16:14:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:58.498 16:14:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:58.498 16:14:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:58.498 16:14:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:58.498 16:14:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:58.498 16:14:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:58.498 16:14:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:17:58.498 16:14:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:17:58.498 16:14:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:58.498 16:14:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:58.498 16:14:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.498 16:14:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:58.498 16:14:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.498 16:14:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:58.498 16:14:24 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.498 16:14:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:58.498 [2024-12-12 16:14:24.670365] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:58.498 [2024-12-12 16:14:24.670419] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:58.498 [2024-12-12 16:14:24.670442] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:58.498 [2024-12-12 16:14:24.670452] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:58.498 [2024-12-12 16:14:24.672626] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:58.498 [2024-12-12 16:14:24.672664] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:58.498 [2024-12-12 16:14:24.672759] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:58.498 [2024-12-12 16:14:24.672809] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:58.498 [2024-12-12 16:14:24.672983] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:58.498 spare 00:17:58.498 16:14:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.498 16:14:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:58.498 16:14:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.498 16:14:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:58.498 [2024-12-12 16:14:24.772897] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:17:58.498 [2024-12-12 16:14:24.772927] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:58.498 [2024-12-12 16:14:24.773167] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:17:58.498 [2024-12-12 16:14:24.773355] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:17:58.498 [2024-12-12 16:14:24.773374] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:17:58.498 [2024-12-12 16:14:24.773533] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:58.498 16:14:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.498 16:14:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:58.498 16:14:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:58.498 16:14:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:58.498 16:14:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:58.498 16:14:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:58.498 16:14:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:58.498 16:14:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:58.498 16:14:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:58.498 16:14:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:58.498 16:14:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:58.498 16:14:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.498 16:14:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:58.498 16:14:24 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.498 16:14:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:58.498 16:14:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.498 16:14:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:58.498 "name": "raid_bdev1", 00:17:58.498 "uuid": "b9f78a46-8883-409f-8823-ae952ef832d6", 00:17:58.498 "strip_size_kb": 0, 00:17:58.498 "state": "online", 00:17:58.498 "raid_level": "raid1", 00:17:58.498 "superblock": true, 00:17:58.498 "num_base_bdevs": 2, 00:17:58.498 "num_base_bdevs_discovered": 2, 00:17:58.498 "num_base_bdevs_operational": 2, 00:17:58.498 "base_bdevs_list": [ 00:17:58.498 { 00:17:58.498 "name": "spare", 00:17:58.498 "uuid": "d7fe1cc3-ba47-5653-8e48-8c03526de28e", 00:17:58.498 "is_configured": true, 00:17:58.498 "data_offset": 256, 00:17:58.498 "data_size": 7936 00:17:58.498 }, 00:17:58.498 { 00:17:58.498 "name": "BaseBdev2", 00:17:58.498 "uuid": "b7d23533-67f0-522b-9577-3d4f449e850c", 00:17:58.498 "is_configured": true, 00:17:58.498 "data_offset": 256, 00:17:58.498 "data_size": 7936 00:17:58.498 } 00:17:58.498 ] 00:17:58.498 }' 00:17:58.498 16:14:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:58.498 16:14:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:59.067 16:14:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:59.067 16:14:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:59.067 16:14:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:59.067 16:14:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:59.067 16:14:25 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:59.067 16:14:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.067 16:14:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.067 16:14:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:59.067 16:14:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:59.067 16:14:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.067 16:14:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:59.067 "name": "raid_bdev1", 00:17:59.067 "uuid": "b9f78a46-8883-409f-8823-ae952ef832d6", 00:17:59.067 "strip_size_kb": 0, 00:17:59.067 "state": "online", 00:17:59.067 "raid_level": "raid1", 00:17:59.067 "superblock": true, 00:17:59.067 "num_base_bdevs": 2, 00:17:59.067 "num_base_bdevs_discovered": 2, 00:17:59.067 "num_base_bdevs_operational": 2, 00:17:59.067 "base_bdevs_list": [ 00:17:59.067 { 00:17:59.067 "name": "spare", 00:17:59.067 "uuid": "d7fe1cc3-ba47-5653-8e48-8c03526de28e", 00:17:59.067 "is_configured": true, 00:17:59.067 "data_offset": 256, 00:17:59.067 "data_size": 7936 00:17:59.067 }, 00:17:59.067 { 00:17:59.067 "name": "BaseBdev2", 00:17:59.067 "uuid": "b7d23533-67f0-522b-9577-3d4f449e850c", 00:17:59.067 "is_configured": true, 00:17:59.067 "data_offset": 256, 00:17:59.067 "data_size": 7936 00:17:59.067 } 00:17:59.067 ] 00:17:59.067 }' 00:17:59.067 16:14:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:59.067 16:14:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:59.067 16:14:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:59.067 16:14:25 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:59.067 16:14:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:17:59.067 16:14:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.067 16:14:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.067 16:14:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:59.067 16:14:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.067 16:14:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:59.067 16:14:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:59.067 16:14:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.067 16:14:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:59.067 [2024-12-12 16:14:25.409136] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:59.067 16:14:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.067 16:14:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:59.067 16:14:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:59.067 16:14:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:59.067 16:14:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:59.067 16:14:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:59.067 16:14:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:59.067 16:14:25 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:59.067 16:14:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:59.067 16:14:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:59.067 16:14:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:59.326 16:14:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.326 16:14:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.326 16:14:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:59.326 16:14:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:59.326 16:14:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.326 16:14:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:59.326 "name": "raid_bdev1", 00:17:59.326 "uuid": "b9f78a46-8883-409f-8823-ae952ef832d6", 00:17:59.326 "strip_size_kb": 0, 00:17:59.326 "state": "online", 00:17:59.326 "raid_level": "raid1", 00:17:59.326 "superblock": true, 00:17:59.326 "num_base_bdevs": 2, 00:17:59.326 "num_base_bdevs_discovered": 1, 00:17:59.326 "num_base_bdevs_operational": 1, 00:17:59.326 "base_bdevs_list": [ 00:17:59.326 { 00:17:59.326 "name": null, 00:17:59.326 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:59.326 "is_configured": false, 00:17:59.326 "data_offset": 0, 00:17:59.326 "data_size": 7936 00:17:59.326 }, 00:17:59.326 { 00:17:59.326 "name": "BaseBdev2", 00:17:59.326 "uuid": "b7d23533-67f0-522b-9577-3d4f449e850c", 00:17:59.326 "is_configured": true, 00:17:59.326 "data_offset": 256, 00:17:59.326 "data_size": 7936 00:17:59.326 } 00:17:59.326 ] 00:17:59.326 }' 00:17:59.326 16:14:25 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:59.326 16:14:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:59.586 16:14:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:59.586 16:14:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.586 16:14:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:59.586 [2024-12-12 16:14:25.808473] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:59.586 [2024-12-12 16:14:25.808629] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:59.586 [2024-12-12 16:14:25.808648] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:17:59.586 [2024-12-12 16:14:25.808677] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:59.586 [2024-12-12 16:14:25.824379] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:17:59.586 16:14:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.586 16:14:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:59.586 [2024-12-12 16:14:25.826184] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:00.525 16:14:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:00.525 16:14:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:00.525 16:14:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:00.525 16:14:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:00.525 
16:14:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:00.525 16:14:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.525 16:14:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.525 16:14:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:00.525 16:14:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:00.525 16:14:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.784 16:14:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:00.784 "name": "raid_bdev1", 00:18:00.784 "uuid": "b9f78a46-8883-409f-8823-ae952ef832d6", 00:18:00.784 "strip_size_kb": 0, 00:18:00.784 "state": "online", 00:18:00.784 "raid_level": "raid1", 00:18:00.784 "superblock": true, 00:18:00.784 "num_base_bdevs": 2, 00:18:00.784 "num_base_bdevs_discovered": 2, 00:18:00.784 "num_base_bdevs_operational": 2, 00:18:00.784 "process": { 00:18:00.784 "type": "rebuild", 00:18:00.785 "target": "spare", 00:18:00.785 "progress": { 00:18:00.785 "blocks": 2560, 00:18:00.785 "percent": 32 00:18:00.785 } 00:18:00.785 }, 00:18:00.785 "base_bdevs_list": [ 00:18:00.785 { 00:18:00.785 "name": "spare", 00:18:00.785 "uuid": "d7fe1cc3-ba47-5653-8e48-8c03526de28e", 00:18:00.785 "is_configured": true, 00:18:00.785 "data_offset": 256, 00:18:00.785 "data_size": 7936 00:18:00.785 }, 00:18:00.785 { 00:18:00.785 "name": "BaseBdev2", 00:18:00.785 "uuid": "b7d23533-67f0-522b-9577-3d4f449e850c", 00:18:00.785 "is_configured": true, 00:18:00.785 "data_offset": 256, 00:18:00.785 "data_size": 7936 00:18:00.785 } 00:18:00.785 ] 00:18:00.785 }' 00:18:00.785 16:14:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:00.785 16:14:26 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:00.785 16:14:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:00.785 16:14:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:00.785 16:14:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:00.785 16:14:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.785 16:14:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:00.785 [2024-12-12 16:14:26.989979] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:00.785 [2024-12-12 16:14:27.030980] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:00.785 [2024-12-12 16:14:27.031035] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:00.785 [2024-12-12 16:14:27.031049] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:00.785 [2024-12-12 16:14:27.031057] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:00.785 16:14:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.785 16:14:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:00.785 16:14:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:00.785 16:14:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:00.785 16:14:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:00.785 16:14:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:00.785 16:14:27 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:00.785 16:14:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:00.785 16:14:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:00.785 16:14:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:00.785 16:14:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:00.785 16:14:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.785 16:14:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.785 16:14:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:00.785 16:14:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:00.785 16:14:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.785 16:14:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:00.785 "name": "raid_bdev1", 00:18:00.785 "uuid": "b9f78a46-8883-409f-8823-ae952ef832d6", 00:18:00.785 "strip_size_kb": 0, 00:18:00.785 "state": "online", 00:18:00.785 "raid_level": "raid1", 00:18:00.785 "superblock": true, 00:18:00.785 "num_base_bdevs": 2, 00:18:00.785 "num_base_bdevs_discovered": 1, 00:18:00.785 "num_base_bdevs_operational": 1, 00:18:00.785 "base_bdevs_list": [ 00:18:00.785 { 00:18:00.785 "name": null, 00:18:00.785 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:00.785 "is_configured": false, 00:18:00.785 "data_offset": 0, 00:18:00.785 "data_size": 7936 00:18:00.785 }, 00:18:00.785 { 00:18:00.785 "name": "BaseBdev2", 00:18:00.785 "uuid": "b7d23533-67f0-522b-9577-3d4f449e850c", 00:18:00.785 "is_configured": true, 00:18:00.785 "data_offset": 256, 00:18:00.785 
"data_size": 7936 00:18:00.785 } 00:18:00.785 ] 00:18:00.785 }' 00:18:00.785 16:14:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:00.785 16:14:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:01.354 16:14:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:01.354 16:14:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.354 16:14:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:01.354 [2024-12-12 16:14:27.518926] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:01.354 [2024-12-12 16:14:27.518982] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:01.354 [2024-12-12 16:14:27.519003] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:18:01.354 [2024-12-12 16:14:27.519014] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:01.354 [2024-12-12 16:14:27.519462] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:01.354 [2024-12-12 16:14:27.519483] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:01.354 [2024-12-12 16:14:27.519586] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:01.354 [2024-12-12 16:14:27.519604] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:01.354 [2024-12-12 16:14:27.519613] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:01.354 [2024-12-12 16:14:27.519666] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:01.354 [2024-12-12 16:14:27.535124] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:18:01.354 spare 00:18:01.354 16:14:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.354 16:14:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:01.354 [2024-12-12 16:14:27.536948] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:02.293 16:14:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:02.293 16:14:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:02.293 16:14:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:02.293 16:14:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:02.293 16:14:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:02.293 16:14:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.293 16:14:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:02.293 16:14:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.293 16:14:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:02.293 16:14:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.293 16:14:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:02.293 "name": "raid_bdev1", 00:18:02.293 "uuid": "b9f78a46-8883-409f-8823-ae952ef832d6", 00:18:02.293 "strip_size_kb": 0, 00:18:02.293 
"state": "online", 00:18:02.293 "raid_level": "raid1", 00:18:02.293 "superblock": true, 00:18:02.293 "num_base_bdevs": 2, 00:18:02.293 "num_base_bdevs_discovered": 2, 00:18:02.293 "num_base_bdevs_operational": 2, 00:18:02.293 "process": { 00:18:02.293 "type": "rebuild", 00:18:02.293 "target": "spare", 00:18:02.293 "progress": { 00:18:02.293 "blocks": 2560, 00:18:02.293 "percent": 32 00:18:02.293 } 00:18:02.293 }, 00:18:02.293 "base_bdevs_list": [ 00:18:02.294 { 00:18:02.294 "name": "spare", 00:18:02.294 "uuid": "d7fe1cc3-ba47-5653-8e48-8c03526de28e", 00:18:02.294 "is_configured": true, 00:18:02.294 "data_offset": 256, 00:18:02.294 "data_size": 7936 00:18:02.294 }, 00:18:02.294 { 00:18:02.294 "name": "BaseBdev2", 00:18:02.294 "uuid": "b7d23533-67f0-522b-9577-3d4f449e850c", 00:18:02.294 "is_configured": true, 00:18:02.294 "data_offset": 256, 00:18:02.294 "data_size": 7936 00:18:02.294 } 00:18:02.294 ] 00:18:02.294 }' 00:18:02.294 16:14:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:02.294 16:14:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:02.554 16:14:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:02.554 16:14:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:02.554 16:14:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:02.554 16:14:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.554 16:14:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:02.554 [2024-12-12 16:14:28.701243] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:02.554 [2024-12-12 16:14:28.741637] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:18:02.554 [2024-12-12 16:14:28.741686] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:02.554 [2024-12-12 16:14:28.741702] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:02.554 [2024-12-12 16:14:28.741708] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:02.554 16:14:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.554 16:14:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:02.554 16:14:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:02.554 16:14:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:02.554 16:14:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:02.554 16:14:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:02.554 16:14:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:02.554 16:14:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:02.554 16:14:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:02.554 16:14:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:02.554 16:14:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:02.554 16:14:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.554 16:14:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.554 16:14:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:02.554 16:14:28 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:02.554 16:14:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.554 16:14:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:02.554 "name": "raid_bdev1", 00:18:02.554 "uuid": "b9f78a46-8883-409f-8823-ae952ef832d6", 00:18:02.554 "strip_size_kb": 0, 00:18:02.554 "state": "online", 00:18:02.554 "raid_level": "raid1", 00:18:02.554 "superblock": true, 00:18:02.554 "num_base_bdevs": 2, 00:18:02.554 "num_base_bdevs_discovered": 1, 00:18:02.554 "num_base_bdevs_operational": 1, 00:18:02.554 "base_bdevs_list": [ 00:18:02.554 { 00:18:02.554 "name": null, 00:18:02.554 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:02.554 "is_configured": false, 00:18:02.554 "data_offset": 0, 00:18:02.554 "data_size": 7936 00:18:02.554 }, 00:18:02.554 { 00:18:02.554 "name": "BaseBdev2", 00:18:02.554 "uuid": "b7d23533-67f0-522b-9577-3d4f449e850c", 00:18:02.554 "is_configured": true, 00:18:02.554 "data_offset": 256, 00:18:02.554 "data_size": 7936 00:18:02.554 } 00:18:02.554 ] 00:18:02.554 }' 00:18:02.554 16:14:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:02.554 16:14:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:03.123 16:14:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:03.123 16:14:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:03.123 16:14:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:03.123 16:14:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:03.123 16:14:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:03.123 16:14:29 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:03.123 16:14:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.123 16:14:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.123 16:14:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:03.123 16:14:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.123 16:14:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:03.123 "name": "raid_bdev1", 00:18:03.123 "uuid": "b9f78a46-8883-409f-8823-ae952ef832d6", 00:18:03.123 "strip_size_kb": 0, 00:18:03.123 "state": "online", 00:18:03.123 "raid_level": "raid1", 00:18:03.123 "superblock": true, 00:18:03.123 "num_base_bdevs": 2, 00:18:03.123 "num_base_bdevs_discovered": 1, 00:18:03.123 "num_base_bdevs_operational": 1, 00:18:03.123 "base_bdevs_list": [ 00:18:03.123 { 00:18:03.123 "name": null, 00:18:03.123 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:03.123 "is_configured": false, 00:18:03.123 "data_offset": 0, 00:18:03.123 "data_size": 7936 00:18:03.123 }, 00:18:03.123 { 00:18:03.123 "name": "BaseBdev2", 00:18:03.123 "uuid": "b7d23533-67f0-522b-9577-3d4f449e850c", 00:18:03.123 "is_configured": true, 00:18:03.123 "data_offset": 256, 00:18:03.123 "data_size": 7936 00:18:03.123 } 00:18:03.123 ] 00:18:03.123 }' 00:18:03.123 16:14:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:03.123 16:14:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:03.123 16:14:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:03.123 16:14:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:03.123 16:14:29 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:18:03.123 16:14:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.123 16:14:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:03.123 16:14:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.123 16:14:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:03.123 16:14:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.123 16:14:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:03.123 [2024-12-12 16:14:29.389476] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:03.123 [2024-12-12 16:14:29.389526] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:03.123 [2024-12-12 16:14:29.389546] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:18:03.123 [2024-12-12 16:14:29.389563] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:03.123 [2024-12-12 16:14:29.389996] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:03.123 [2024-12-12 16:14:29.390013] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:03.124 [2024-12-12 16:14:29.390085] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:03.124 [2024-12-12 16:14:29.390099] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:03.124 [2024-12-12 16:14:29.390110] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:03.124 [2024-12-12 16:14:29.390128] 
bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:18:03.124 BaseBdev1 00:18:03.124 16:14:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.124 16:14:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:04.061 16:14:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:04.061 16:14:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:04.061 16:14:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:04.061 16:14:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:04.061 16:14:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:04.061 16:14:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:04.061 16:14:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:04.061 16:14:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:04.061 16:14:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:04.061 16:14:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:04.061 16:14:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.061 16:14:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:04.061 16:14:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.061 16:14:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:04.320 16:14:30 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.320 16:14:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:04.320 "name": "raid_bdev1", 00:18:04.320 "uuid": "b9f78a46-8883-409f-8823-ae952ef832d6", 00:18:04.320 "strip_size_kb": 0, 00:18:04.320 "state": "online", 00:18:04.320 "raid_level": "raid1", 00:18:04.320 "superblock": true, 00:18:04.320 "num_base_bdevs": 2, 00:18:04.320 "num_base_bdevs_discovered": 1, 00:18:04.320 "num_base_bdevs_operational": 1, 00:18:04.320 "base_bdevs_list": [ 00:18:04.320 { 00:18:04.320 "name": null, 00:18:04.320 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:04.320 "is_configured": false, 00:18:04.320 "data_offset": 0, 00:18:04.320 "data_size": 7936 00:18:04.320 }, 00:18:04.320 { 00:18:04.320 "name": "BaseBdev2", 00:18:04.320 "uuid": "b7d23533-67f0-522b-9577-3d4f449e850c", 00:18:04.320 "is_configured": true, 00:18:04.320 "data_offset": 256, 00:18:04.320 "data_size": 7936 00:18:04.320 } 00:18:04.320 ] 00:18:04.320 }' 00:18:04.320 16:14:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:04.320 16:14:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:04.579 16:14:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:04.579 16:14:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:04.579 16:14:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:04.579 16:14:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:04.579 16:14:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:04.579 16:14:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.579 16:14:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:18:04.579 16:14:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.579 16:14:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:04.579 16:14:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.579 16:14:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:04.579 "name": "raid_bdev1", 00:18:04.579 "uuid": "b9f78a46-8883-409f-8823-ae952ef832d6", 00:18:04.579 "strip_size_kb": 0, 00:18:04.579 "state": "online", 00:18:04.579 "raid_level": "raid1", 00:18:04.579 "superblock": true, 00:18:04.579 "num_base_bdevs": 2, 00:18:04.579 "num_base_bdevs_discovered": 1, 00:18:04.579 "num_base_bdevs_operational": 1, 00:18:04.579 "base_bdevs_list": [ 00:18:04.579 { 00:18:04.579 "name": null, 00:18:04.580 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:04.580 "is_configured": false, 00:18:04.580 "data_offset": 0, 00:18:04.580 "data_size": 7936 00:18:04.580 }, 00:18:04.580 { 00:18:04.580 "name": "BaseBdev2", 00:18:04.580 "uuid": "b7d23533-67f0-522b-9577-3d4f449e850c", 00:18:04.580 "is_configured": true, 00:18:04.580 "data_offset": 256, 00:18:04.580 "data_size": 7936 00:18:04.580 } 00:18:04.580 ] 00:18:04.580 }' 00:18:04.580 16:14:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:04.839 16:14:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:04.839 16:14:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:04.839 16:14:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:04.839 16:14:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:04.839 16:14:31 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@652 -- # local es=0 00:18:04.839 16:14:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:04.839 16:14:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:04.839 16:14:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:04.839 16:14:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:04.839 16:14:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:04.839 16:14:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:04.839 16:14:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.839 16:14:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:04.839 [2024-12-12 16:14:31.038870] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:04.839 [2024-12-12 16:14:31.039027] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:04.839 [2024-12-12 16:14:31.039049] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:04.839 request: 00:18:04.839 { 00:18:04.839 "base_bdev": "BaseBdev1", 00:18:04.839 "raid_bdev": "raid_bdev1", 00:18:04.839 "method": "bdev_raid_add_base_bdev", 00:18:04.839 "req_id": 1 00:18:04.839 } 00:18:04.839 Got JSON-RPC error response 00:18:04.839 response: 00:18:04.839 { 00:18:04.839 "code": -22, 00:18:04.839 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:18:04.839 } 00:18:04.839 16:14:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 
00:18:04.839 16:14:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # es=1 00:18:04.839 16:14:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:04.839 16:14:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:04.839 16:14:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:04.839 16:14:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:05.778 16:14:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:05.778 16:14:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:05.778 16:14:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:05.778 16:14:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:05.778 16:14:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:05.778 16:14:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:05.778 16:14:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:05.778 16:14:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:05.778 16:14:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:05.778 16:14:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:05.778 16:14:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:05.778 16:14:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:05.778 16:14:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:18:05.778 16:14:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:05.778 16:14:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.778 16:14:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:05.778 "name": "raid_bdev1", 00:18:05.778 "uuid": "b9f78a46-8883-409f-8823-ae952ef832d6", 00:18:05.778 "strip_size_kb": 0, 00:18:05.778 "state": "online", 00:18:05.778 "raid_level": "raid1", 00:18:05.778 "superblock": true, 00:18:05.778 "num_base_bdevs": 2, 00:18:05.778 "num_base_bdevs_discovered": 1, 00:18:05.778 "num_base_bdevs_operational": 1, 00:18:05.778 "base_bdevs_list": [ 00:18:05.778 { 00:18:05.778 "name": null, 00:18:05.778 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:05.778 "is_configured": false, 00:18:05.778 "data_offset": 0, 00:18:05.778 "data_size": 7936 00:18:05.778 }, 00:18:05.778 { 00:18:05.778 "name": "BaseBdev2", 00:18:05.778 "uuid": "b7d23533-67f0-522b-9577-3d4f449e850c", 00:18:05.778 "is_configured": true, 00:18:05.778 "data_offset": 256, 00:18:05.778 "data_size": 7936 00:18:05.778 } 00:18:05.778 ] 00:18:05.778 }' 00:18:05.778 16:14:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:05.778 16:14:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:06.347 16:14:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:06.347 16:14:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:06.347 16:14:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:06.347 16:14:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:06.347 16:14:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:06.347 16:14:32 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.347 16:14:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.347 16:14:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:06.347 16:14:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:06.347 16:14:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.347 16:14:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:06.347 "name": "raid_bdev1", 00:18:06.347 "uuid": "b9f78a46-8883-409f-8823-ae952ef832d6", 00:18:06.347 "strip_size_kb": 0, 00:18:06.347 "state": "online", 00:18:06.347 "raid_level": "raid1", 00:18:06.347 "superblock": true, 00:18:06.347 "num_base_bdevs": 2, 00:18:06.347 "num_base_bdevs_discovered": 1, 00:18:06.347 "num_base_bdevs_operational": 1, 00:18:06.347 "base_bdevs_list": [ 00:18:06.347 { 00:18:06.347 "name": null, 00:18:06.347 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:06.347 "is_configured": false, 00:18:06.347 "data_offset": 0, 00:18:06.347 "data_size": 7936 00:18:06.347 }, 00:18:06.347 { 00:18:06.347 "name": "BaseBdev2", 00:18:06.347 "uuid": "b7d23533-67f0-522b-9577-3d4f449e850c", 00:18:06.347 "is_configured": true, 00:18:06.347 "data_offset": 256, 00:18:06.347 "data_size": 7936 00:18:06.347 } 00:18:06.347 ] 00:18:06.347 }' 00:18:06.347 16:14:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:06.347 16:14:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:06.347 16:14:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:06.347 16:14:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:06.347 16:14:32 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 88597 00:18:06.347 16:14:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 88597 ']' 00:18:06.347 16:14:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 88597 00:18:06.347 16:14:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:18:06.347 16:14:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:06.347 16:14:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88597 00:18:06.347 16:14:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:06.347 16:14:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:06.347 killing process with pid 88597 00:18:06.347 16:14:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88597' 00:18:06.347 16:14:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@973 -- # kill 88597 00:18:06.347 Received shutdown signal, test time was about 60.000000 seconds 00:18:06.347 00:18:06.347 Latency(us) 00:18:06.347 [2024-12-12T16:14:32.699Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:06.347 [2024-12-12T16:14:32.699Z] =================================================================================================================== 00:18:06.347 [2024-12-12T16:14:32.699Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:06.347 [2024-12-12 16:14:32.688480] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:06.347 [2024-12-12 16:14:32.688587] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:06.347 [2024-12-12 16:14:32.688645] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going 
to free all in destruct 00:18:06.347 [2024-12-12 16:14:32.688657] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:18:06.347 16:14:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@978 -- # wait 88597 00:18:06.916 [2024-12-12 16:14:32.972324] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:07.855 16:14:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:18:07.855 00:18:07.855 real 0m19.723s 00:18:07.855 user 0m25.684s 00:18:07.855 sys 0m2.711s 00:18:07.855 16:14:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:07.855 16:14:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:07.855 ************************************ 00:18:07.855 END TEST raid_rebuild_test_sb_4k 00:18:07.855 ************************************ 00:18:07.855 16:14:34 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:18:07.855 16:14:34 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:18:07.855 16:14:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:07.855 16:14:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:07.855 16:14:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:07.855 ************************************ 00:18:07.855 START TEST raid_state_function_test_sb_md_separate 00:18:07.855 ************************************ 00:18:07.855 16:14:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:18:07.855 16:14:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:18:07.855 16:14:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:18:07.855 16:14:34 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:18:07.855 16:14:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:07.855 16:14:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:07.855 16:14:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:07.855 16:14:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:07.855 16:14:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:07.856 16:14:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:07.856 16:14:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:07.856 16:14:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:07.856 16:14:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:07.856 16:14:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:07.856 16:14:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:18:07.856 16:14:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:07.856 16:14:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:07.856 16:14:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:07.856 16:14:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:07.856 16:14:34 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:18:07.856 16:14:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:18:07.856 16:14:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:18:07.856 16:14:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:18:07.856 16:14:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=89286 00:18:07.856 16:14:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:07.856 16:14:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 89286' 00:18:07.856 Process raid pid: 89286 00:18:07.856 16:14:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 89286 00:18:07.856 16:14:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 89286 ']' 00:18:07.856 16:14:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:07.856 16:14:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:07.856 16:14:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:07.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:07.856 16:14:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:07.856 16:14:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:07.856 [2024-12-12 16:14:34.177145] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:18:07.856 [2024-12-12 16:14:34.177315] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:08.115 [2024-12-12 16:14:34.351770] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:08.115 [2024-12-12 16:14:34.460429] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:08.374 [2024-12-12 16:14:34.651311] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:08.374 [2024-12-12 16:14:34.651427] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:08.943 16:14:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:08.943 16:14:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:18:08.943 16:14:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:08.943 16:14:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.943 16:14:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:08.943 [2024-12-12 16:14:35.000023] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:08.943 [2024-12-12 16:14:35.000129] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev1 doesn't exist now 00:18:08.943 [2024-12-12 16:14:35.000160] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:08.943 [2024-12-12 16:14:35.000200] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:08.943 16:14:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.943 16:14:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:08.943 16:14:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:08.943 16:14:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:08.943 16:14:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:08.943 16:14:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:08.943 16:14:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:08.943 16:14:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:08.943 16:14:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:08.943 16:14:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:08.943 16:14:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:08.943 16:14:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:08.943 16:14:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:18:08.943 16:14:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.943 16:14:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:08.943 16:14:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.943 16:14:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:08.943 "name": "Existed_Raid", 00:18:08.943 "uuid": "38ba0cc8-06e5-4919-9aa3-7dd4e0ea3c8d", 00:18:08.943 "strip_size_kb": 0, 00:18:08.943 "state": "configuring", 00:18:08.943 "raid_level": "raid1", 00:18:08.943 "superblock": true, 00:18:08.943 "num_base_bdevs": 2, 00:18:08.943 "num_base_bdevs_discovered": 0, 00:18:08.943 "num_base_bdevs_operational": 2, 00:18:08.943 "base_bdevs_list": [ 00:18:08.943 { 00:18:08.943 "name": "BaseBdev1", 00:18:08.943 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:08.943 "is_configured": false, 00:18:08.943 "data_offset": 0, 00:18:08.943 "data_size": 0 00:18:08.943 }, 00:18:08.943 { 00:18:08.943 "name": "BaseBdev2", 00:18:08.943 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:08.943 "is_configured": false, 00:18:08.943 "data_offset": 0, 00:18:08.943 "data_size": 0 00:18:08.943 } 00:18:08.943 ] 00:18:08.943 }' 00:18:08.943 16:14:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:08.943 16:14:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:09.203 16:14:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:09.203 16:14:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.203 16:14:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:09.203 
[2024-12-12 16:14:35.479316] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:09.203 [2024-12-12 16:14:35.479388] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:18:09.203 16:14:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.203 16:14:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:09.203 16:14:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.203 16:14:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:09.203 [2024-12-12 16:14:35.491305] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:09.203 [2024-12-12 16:14:35.491345] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:09.203 [2024-12-12 16:14:35.491353] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:09.203 [2024-12-12 16:14:35.491363] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:09.203 16:14:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.203 16:14:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:18:09.203 16:14:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.203 16:14:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:09.203 [2024-12-12 16:14:35.539846] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:09.203 
BaseBdev1 00:18:09.203 16:14:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.203 16:14:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:09.203 16:14:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:18:09.203 16:14:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:09.203 16:14:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:18:09.203 16:14:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:09.203 16:14:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:09.203 16:14:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:09.203 16:14:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.203 16:14:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:09.203 16:14:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.203 16:14:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:09.463 16:14:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.463 16:14:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:09.463 [ 00:18:09.463 { 00:18:09.463 "name": "BaseBdev1", 00:18:09.463 "aliases": [ 00:18:09.463 "51b1a9e1-e51d-467b-b6ab-a4227432535b" 00:18:09.463 ], 00:18:09.463 "product_name": "Malloc disk", 
00:18:09.463 "block_size": 4096, 00:18:09.463 "num_blocks": 8192, 00:18:09.463 "uuid": "51b1a9e1-e51d-467b-b6ab-a4227432535b", 00:18:09.463 "md_size": 32, 00:18:09.463 "md_interleave": false, 00:18:09.463 "dif_type": 0, 00:18:09.463 "assigned_rate_limits": { 00:18:09.463 "rw_ios_per_sec": 0, 00:18:09.463 "rw_mbytes_per_sec": 0, 00:18:09.463 "r_mbytes_per_sec": 0, 00:18:09.463 "w_mbytes_per_sec": 0 00:18:09.463 }, 00:18:09.463 "claimed": true, 00:18:09.463 "claim_type": "exclusive_write", 00:18:09.463 "zoned": false, 00:18:09.463 "supported_io_types": { 00:18:09.463 "read": true, 00:18:09.463 "write": true, 00:18:09.463 "unmap": true, 00:18:09.463 "flush": true, 00:18:09.463 "reset": true, 00:18:09.463 "nvme_admin": false, 00:18:09.463 "nvme_io": false, 00:18:09.463 "nvme_io_md": false, 00:18:09.463 "write_zeroes": true, 00:18:09.463 "zcopy": true, 00:18:09.463 "get_zone_info": false, 00:18:09.463 "zone_management": false, 00:18:09.463 "zone_append": false, 00:18:09.463 "compare": false, 00:18:09.463 "compare_and_write": false, 00:18:09.463 "abort": true, 00:18:09.463 "seek_hole": false, 00:18:09.463 "seek_data": false, 00:18:09.463 "copy": true, 00:18:09.463 "nvme_iov_md": false 00:18:09.463 }, 00:18:09.463 "memory_domains": [ 00:18:09.463 { 00:18:09.463 "dma_device_id": "system", 00:18:09.463 "dma_device_type": 1 00:18:09.463 }, 00:18:09.463 { 00:18:09.463 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:09.463 "dma_device_type": 2 00:18:09.463 } 00:18:09.463 ], 00:18:09.463 "driver_specific": {} 00:18:09.463 } 00:18:09.463 ] 00:18:09.463 16:14:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.463 16:14:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:18:09.463 16:14:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:09.463 16:14:35 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:09.463 16:14:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:09.463 16:14:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:09.463 16:14:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:09.463 16:14:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:09.463 16:14:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:09.463 16:14:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:09.463 16:14:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:09.463 16:14:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:09.463 16:14:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.463 16:14:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.463 16:14:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:09.463 16:14:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:09.463 16:14:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.463 16:14:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:09.463 "name": "Existed_Raid", 00:18:09.463 "uuid": "29262c86-b3b5-4e07-b447-7dc359b4783e", 
00:18:09.463 "strip_size_kb": 0, 00:18:09.463 "state": "configuring", 00:18:09.463 "raid_level": "raid1", 00:18:09.463 "superblock": true, 00:18:09.463 "num_base_bdevs": 2, 00:18:09.463 "num_base_bdevs_discovered": 1, 00:18:09.463 "num_base_bdevs_operational": 2, 00:18:09.463 "base_bdevs_list": [ 00:18:09.463 { 00:18:09.463 "name": "BaseBdev1", 00:18:09.463 "uuid": "51b1a9e1-e51d-467b-b6ab-a4227432535b", 00:18:09.463 "is_configured": true, 00:18:09.464 "data_offset": 256, 00:18:09.464 "data_size": 7936 00:18:09.464 }, 00:18:09.464 { 00:18:09.464 "name": "BaseBdev2", 00:18:09.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:09.464 "is_configured": false, 00:18:09.464 "data_offset": 0, 00:18:09.464 "data_size": 0 00:18:09.464 } 00:18:09.464 ] 00:18:09.464 }' 00:18:09.464 16:14:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:09.464 16:14:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:09.724 16:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:09.724 16:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.724 16:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:09.724 [2024-12-12 16:14:36.035088] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:09.724 [2024-12-12 16:14:36.035187] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:09.724 16:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.724 16:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:09.724 16:14:36 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.724 16:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:09.724 [2024-12-12 16:14:36.047109] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:09.724 [2024-12-12 16:14:36.048891] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:09.724 [2024-12-12 16:14:36.048983] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:09.724 16:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.724 16:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:18:09.724 16:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:09.724 16:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:09.724 16:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:09.724 16:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:09.724 16:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:09.724 16:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:09.724 16:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:09.724 16:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:09.724 16:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:09.724 16:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:09.724 16:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:09.724 16:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.724 16:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:09.724 16:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.724 16:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:09.984 16:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.984 16:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:09.984 "name": "Existed_Raid", 00:18:09.984 "uuid": "7530a838-8d98-48e5-832d-2ad277675157", 00:18:09.984 "strip_size_kb": 0, 00:18:09.984 "state": "configuring", 00:18:09.984 "raid_level": "raid1", 00:18:09.984 "superblock": true, 00:18:09.984 "num_base_bdevs": 2, 00:18:09.984 "num_base_bdevs_discovered": 1, 00:18:09.984 "num_base_bdevs_operational": 2, 00:18:09.984 "base_bdevs_list": [ 00:18:09.984 { 00:18:09.984 "name": "BaseBdev1", 00:18:09.984 "uuid": "51b1a9e1-e51d-467b-b6ab-a4227432535b", 00:18:09.984 "is_configured": true, 00:18:09.984 "data_offset": 256, 00:18:09.984 "data_size": 7936 00:18:09.984 }, 00:18:09.984 { 00:18:09.984 "name": "BaseBdev2", 00:18:09.984 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:09.984 "is_configured": false, 00:18:09.984 "data_offset": 0, 00:18:09.984 "data_size": 0 00:18:09.984 } 00:18:09.984 ] 00:18:09.984 }' 00:18:09.984 16:14:36 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:09.984 16:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:10.244 16:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:18:10.244 16:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.244 16:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:10.244 [2024-12-12 16:14:36.549001] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:10.244 [2024-12-12 16:14:36.549225] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:10.244 [2024-12-12 16:14:36.549238] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:10.244 [2024-12-12 16:14:36.549314] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:10.244 [2024-12-12 16:14:36.549439] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:10.244 [2024-12-12 16:14:36.549450] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:18:10.244 [2024-12-12 16:14:36.549549] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:10.244 BaseBdev2 00:18:10.244 16:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.244 16:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:18:10.244 16:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:18:10.244 16:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:10.244 16:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:18:10.244 16:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:10.244 16:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:10.244 16:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:10.244 16:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.244 16:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:10.244 16:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.244 16:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:10.244 16:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.244 16:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:10.244 [ 00:18:10.244 { 00:18:10.244 "name": "BaseBdev2", 00:18:10.244 "aliases": [ 00:18:10.244 "df92fce9-f37f-4e17-916f-f6d0e9a6fcdc" 00:18:10.244 ], 00:18:10.244 "product_name": "Malloc disk", 00:18:10.244 "block_size": 4096, 00:18:10.244 "num_blocks": 8192, 00:18:10.244 "uuid": "df92fce9-f37f-4e17-916f-f6d0e9a6fcdc", 00:18:10.244 "md_size": 32, 00:18:10.244 "md_interleave": false, 00:18:10.244 "dif_type": 0, 00:18:10.244 "assigned_rate_limits": { 00:18:10.244 "rw_ios_per_sec": 0, 00:18:10.244 "rw_mbytes_per_sec": 0, 00:18:10.244 "r_mbytes_per_sec": 0, 00:18:10.244 "w_mbytes_per_sec": 0 00:18:10.244 }, 00:18:10.244 "claimed": true, 00:18:10.244 "claim_type": 
"exclusive_write", 00:18:10.244 "zoned": false, 00:18:10.244 "supported_io_types": { 00:18:10.244 "read": true, 00:18:10.244 "write": true, 00:18:10.244 "unmap": true, 00:18:10.244 "flush": true, 00:18:10.244 "reset": true, 00:18:10.244 "nvme_admin": false, 00:18:10.244 "nvme_io": false, 00:18:10.244 "nvme_io_md": false, 00:18:10.244 "write_zeroes": true, 00:18:10.244 "zcopy": true, 00:18:10.244 "get_zone_info": false, 00:18:10.244 "zone_management": false, 00:18:10.244 "zone_append": false, 00:18:10.244 "compare": false, 00:18:10.244 "compare_and_write": false, 00:18:10.244 "abort": true, 00:18:10.244 "seek_hole": false, 00:18:10.244 "seek_data": false, 00:18:10.244 "copy": true, 00:18:10.244 "nvme_iov_md": false 00:18:10.244 }, 00:18:10.244 "memory_domains": [ 00:18:10.244 { 00:18:10.244 "dma_device_id": "system", 00:18:10.244 "dma_device_type": 1 00:18:10.244 }, 00:18:10.244 { 00:18:10.244 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:10.244 "dma_device_type": 2 00:18:10.244 } 00:18:10.244 ], 00:18:10.244 "driver_specific": {} 00:18:10.244 } 00:18:10.244 ] 00:18:10.244 16:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.244 16:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:18:10.244 16:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:10.244 16:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:10.245 16:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:18:10.245 16:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:10.245 16:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:10.245 
16:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:10.245 16:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:10.245 16:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:10.245 16:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:10.245 16:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:10.245 16:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:10.245 16:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:10.504 16:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.504 16:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:10.504 16:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.504 16:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:10.505 16:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.505 16:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:10.505 "name": "Existed_Raid", 00:18:10.505 "uuid": "7530a838-8d98-48e5-832d-2ad277675157", 00:18:10.505 "strip_size_kb": 0, 00:18:10.505 "state": "online", 00:18:10.505 "raid_level": "raid1", 00:18:10.505 "superblock": true, 00:18:10.505 "num_base_bdevs": 2, 00:18:10.505 "num_base_bdevs_discovered": 2, 00:18:10.505 "num_base_bdevs_operational": 2, 00:18:10.505 
"base_bdevs_list": [ 00:18:10.505 { 00:18:10.505 "name": "BaseBdev1", 00:18:10.505 "uuid": "51b1a9e1-e51d-467b-b6ab-a4227432535b", 00:18:10.505 "is_configured": true, 00:18:10.505 "data_offset": 256, 00:18:10.505 "data_size": 7936 00:18:10.505 }, 00:18:10.505 { 00:18:10.505 "name": "BaseBdev2", 00:18:10.505 "uuid": "df92fce9-f37f-4e17-916f-f6d0e9a6fcdc", 00:18:10.505 "is_configured": true, 00:18:10.505 "data_offset": 256, 00:18:10.505 "data_size": 7936 00:18:10.505 } 00:18:10.505 ] 00:18:10.505 }' 00:18:10.505 16:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:10.505 16:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:10.765 16:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:10.765 16:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:10.765 16:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:10.765 16:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:10.765 16:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:18:10.765 16:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:10.765 16:14:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:10.765 16:14:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.765 16:14:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:10.765 16:14:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # 
set +x 00:18:10.765 [2024-12-12 16:14:37.008493] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:10.765 16:14:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.765 16:14:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:10.765 "name": "Existed_Raid", 00:18:10.765 "aliases": [ 00:18:10.765 "7530a838-8d98-48e5-832d-2ad277675157" 00:18:10.765 ], 00:18:10.765 "product_name": "Raid Volume", 00:18:10.765 "block_size": 4096, 00:18:10.765 "num_blocks": 7936, 00:18:10.765 "uuid": "7530a838-8d98-48e5-832d-2ad277675157", 00:18:10.765 "md_size": 32, 00:18:10.765 "md_interleave": false, 00:18:10.765 "dif_type": 0, 00:18:10.765 "assigned_rate_limits": { 00:18:10.765 "rw_ios_per_sec": 0, 00:18:10.765 "rw_mbytes_per_sec": 0, 00:18:10.765 "r_mbytes_per_sec": 0, 00:18:10.765 "w_mbytes_per_sec": 0 00:18:10.765 }, 00:18:10.765 "claimed": false, 00:18:10.765 "zoned": false, 00:18:10.765 "supported_io_types": { 00:18:10.765 "read": true, 00:18:10.765 "write": true, 00:18:10.765 "unmap": false, 00:18:10.765 "flush": false, 00:18:10.765 "reset": true, 00:18:10.765 "nvme_admin": false, 00:18:10.765 "nvme_io": false, 00:18:10.765 "nvme_io_md": false, 00:18:10.765 "write_zeroes": true, 00:18:10.765 "zcopy": false, 00:18:10.765 "get_zone_info": false, 00:18:10.765 "zone_management": false, 00:18:10.765 "zone_append": false, 00:18:10.765 "compare": false, 00:18:10.765 "compare_and_write": false, 00:18:10.765 "abort": false, 00:18:10.765 "seek_hole": false, 00:18:10.765 "seek_data": false, 00:18:10.765 "copy": false, 00:18:10.765 "nvme_iov_md": false 00:18:10.765 }, 00:18:10.765 "memory_domains": [ 00:18:10.765 { 00:18:10.765 "dma_device_id": "system", 00:18:10.765 "dma_device_type": 1 00:18:10.765 }, 00:18:10.765 { 00:18:10.765 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:10.765 "dma_device_type": 2 00:18:10.765 }, 00:18:10.765 { 
00:18:10.765 "dma_device_id": "system", 00:18:10.765 "dma_device_type": 1 00:18:10.765 }, 00:18:10.765 { 00:18:10.765 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:10.765 "dma_device_type": 2 00:18:10.765 } 00:18:10.765 ], 00:18:10.765 "driver_specific": { 00:18:10.765 "raid": { 00:18:10.765 "uuid": "7530a838-8d98-48e5-832d-2ad277675157", 00:18:10.765 "strip_size_kb": 0, 00:18:10.765 "state": "online", 00:18:10.765 "raid_level": "raid1", 00:18:10.765 "superblock": true, 00:18:10.765 "num_base_bdevs": 2, 00:18:10.765 "num_base_bdevs_discovered": 2, 00:18:10.765 "num_base_bdevs_operational": 2, 00:18:10.765 "base_bdevs_list": [ 00:18:10.765 { 00:18:10.765 "name": "BaseBdev1", 00:18:10.765 "uuid": "51b1a9e1-e51d-467b-b6ab-a4227432535b", 00:18:10.765 "is_configured": true, 00:18:10.765 "data_offset": 256, 00:18:10.765 "data_size": 7936 00:18:10.765 }, 00:18:10.765 { 00:18:10.765 "name": "BaseBdev2", 00:18:10.765 "uuid": "df92fce9-f37f-4e17-916f-f6d0e9a6fcdc", 00:18:10.765 "is_configured": true, 00:18:10.765 "data_offset": 256, 00:18:10.765 "data_size": 7936 00:18:10.765 } 00:18:10.765 ] 00:18:10.765 } 00:18:10.765 } 00:18:10.765 }' 00:18:10.765 16:14:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:10.765 16:14:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:10.765 BaseBdev2' 00:18:10.765 16:14:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:11.026 16:14:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:18:11.026 16:14:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:11.026 16:14:37 bdev_raid.raid_state_function_test_sb_md_separate 
-- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:18:11.026 16:14:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:11.026 16:14:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.026 16:14:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:11.026 16:14:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.026 16:14:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:11.026 16:14:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:11.026 16:14:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:11.026 16:14:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:11.026 16:14:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:11.026 16:14:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.026 16:14:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:11.026 16:14:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.026 16:14:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:11.026 16:14:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ 
\0 ]] 00:18:11.026 16:14:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:11.026 16:14:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.026 16:14:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:11.026 [2024-12-12 16:14:37.231927] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:11.026 16:14:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.026 16:14:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:11.026 16:14:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:18:11.026 16:14:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:11.026 16:14:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:18:11.026 16:14:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:18:11.026 16:14:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:18:11.026 16:14:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:11.026 16:14:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:11.026 16:14:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:11.026 16:14:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:11.026 16:14:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:18:11.026 16:14:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:11.026 16:14:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:11.026 16:14:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:11.026 16:14:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:11.026 16:14:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.026 16:14:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:11.026 16:14:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.026 16:14:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:11.026 16:14:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.286 16:14:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:11.286 "name": "Existed_Raid", 00:18:11.286 "uuid": "7530a838-8d98-48e5-832d-2ad277675157", 00:18:11.286 "strip_size_kb": 0, 00:18:11.286 "state": "online", 00:18:11.286 "raid_level": "raid1", 00:18:11.286 "superblock": true, 00:18:11.286 "num_base_bdevs": 2, 00:18:11.286 "num_base_bdevs_discovered": 1, 00:18:11.286 "num_base_bdevs_operational": 1, 00:18:11.286 "base_bdevs_list": [ 00:18:11.286 { 00:18:11.286 "name": null, 00:18:11.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:11.286 "is_configured": false, 00:18:11.286 "data_offset": 0, 00:18:11.286 "data_size": 7936 00:18:11.286 }, 00:18:11.286 { 00:18:11.286 "name": "BaseBdev2", 00:18:11.286 "uuid": 
"df92fce9-f37f-4e17-916f-f6d0e9a6fcdc", 00:18:11.286 "is_configured": true, 00:18:11.286 "data_offset": 256, 00:18:11.286 "data_size": 7936 00:18:11.286 } 00:18:11.286 ] 00:18:11.286 }' 00:18:11.286 16:14:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:11.286 16:14:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:11.546 16:14:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:11.546 16:14:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:11.546 16:14:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.546 16:14:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:11.546 16:14:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.546 16:14:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:11.546 16:14:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.546 16:14:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:11.546 16:14:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:11.546 16:14:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:18:11.546 16:14:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.546 16:14:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:11.546 [2024-12-12 16:14:37.758002] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:11.546 [2024-12-12 16:14:37.758103] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:11.546 [2024-12-12 16:14:37.853949] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:11.546 [2024-12-12 16:14:37.854001] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:11.546 [2024-12-12 16:14:37.854012] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:11.546 16:14:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.546 16:14:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:11.546 16:14:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:11.546 16:14:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.546 16:14:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.546 16:14:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:11.546 16:14:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:11.546 16:14:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.806 16:14:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:11.806 16:14:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:11.806 16:14:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:18:11.806 16:14:37 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 89286 00:18:11.806 16:14:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 89286 ']' 00:18:11.806 16:14:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 89286 00:18:11.806 16:14:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:18:11.806 16:14:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:11.806 16:14:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89286 00:18:11.806 killing process with pid 89286 00:18:11.806 16:14:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:11.806 16:14:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:11.806 16:14:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89286' 00:18:11.806 16:14:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 89286 00:18:11.806 [2024-12-12 16:14:37.950221] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:11.806 16:14:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 89286 00:18:11.806 [2024-12-12 16:14:37.966071] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:12.747 16:14:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:18:12.747 00:18:12.747 real 0m4.943s 00:18:12.747 user 0m7.057s 00:18:12.747 sys 0m0.915s 00:18:12.747 16:14:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:12.747 
16:14:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:12.747 ************************************ 00:18:12.747 END TEST raid_state_function_test_sb_md_separate 00:18:12.747 ************************************ 00:18:12.747 16:14:39 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:18:12.747 16:14:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:18:12.747 16:14:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:12.747 16:14:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:13.007 ************************************ 00:18:13.007 START TEST raid_superblock_test_md_separate 00:18:13.007 ************************************ 00:18:13.007 16:14:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:18:13.007 16:14:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:18:13.007 16:14:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:18:13.007 16:14:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:18:13.007 16:14:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:18:13.007 16:14:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:18:13.007 16:14:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:18:13.007 16:14:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:18:13.007 16:14:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:18:13.007 16:14:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 
00:18:13.007 16:14:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:18:13.007 16:14:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:18:13.007 16:14:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:18:13.007 16:14:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:18:13.007 16:14:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:18:13.007 16:14:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:18:13.007 16:14:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=89534 00:18:13.007 16:14:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:18:13.007 16:14:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 89534 00:18:13.007 16:14:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # '[' -z 89534 ']' 00:18:13.007 16:14:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:13.007 16:14:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:13.007 16:14:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:13.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:13.007 16:14:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:13.007 16:14:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:13.007 [2024-12-12 16:14:39.205760] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:18:13.007 [2024-12-12 16:14:39.205947] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89534 ] 00:18:13.267 [2024-12-12 16:14:39.385496] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:13.267 [2024-12-12 16:14:39.489211] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:13.527 [2024-12-12 16:14:39.675583] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:13.527 [2024-12-12 16:14:39.675718] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:13.787 16:14:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:13.787 16:14:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@868 -- # return 0 00:18:13.787 16:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:18:13.787 16:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:13.787 16:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:18:13.787 16:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:18:13.787 16:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:13.787 16:14:40 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:13.787 16:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:13.787 16:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:13.787 16:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:18:13.787 16:14:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.787 16:14:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:13.787 malloc1 00:18:13.787 16:14:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.787 16:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:13.787 16:14:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.788 16:14:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:13.788 [2024-12-12 16:14:40.055717] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:13.788 [2024-12-12 16:14:40.055870] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:13.788 [2024-12-12 16:14:40.055917] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:13.788 [2024-12-12 16:14:40.055949] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:13.788 [2024-12-12 16:14:40.057968] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:13.788 [2024-12-12 16:14:40.058047] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt1 00:18:13.788 pt1 00:18:13.788 16:14:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.788 16:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:13.788 16:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:13.788 16:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:18:13.788 16:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:18:13.788 16:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:13.788 16:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:13.788 16:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:13.788 16:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:13.788 16:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:18:13.788 16:14:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.788 16:14:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:13.788 malloc2 00:18:13.788 16:14:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.788 16:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:13.788 16:14:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.788 16:14:40 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:13.788 [2024-12-12 16:14:40.109687] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:13.788 [2024-12-12 16:14:40.109741] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:13.788 [2024-12-12 16:14:40.109760] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:13.788 [2024-12-12 16:14:40.109768] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:13.788 [2024-12-12 16:14:40.111539] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:13.788 [2024-12-12 16:14:40.111650] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:13.788 pt2 00:18:13.788 16:14:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.788 16:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:13.788 16:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:13.788 16:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:18:13.788 16:14:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.788 16:14:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:13.788 [2024-12-12 16:14:40.121692] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:13.788 [2024-12-12 16:14:40.123424] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:13.788 [2024-12-12 16:14:40.123584] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:13.788 [2024-12-12 16:14:40.123599] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:13.788 [2024-12-12 16:14:40.123675] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:13.788 [2024-12-12 16:14:40.123787] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:13.788 [2024-12-12 16:14:40.123798] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:13.788 [2024-12-12 16:14:40.123906] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:13.788 16:14:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.788 16:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:13.788 16:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:13.788 16:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:13.788 16:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:13.788 16:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:13.788 16:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:13.788 16:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:13.788 16:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:13.788 16:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:13.788 16:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:13.788 16:14:40 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.788 16:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:13.788 16:14:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.788 16:14:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:14.048 16:14:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.048 16:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:14.048 "name": "raid_bdev1", 00:18:14.048 "uuid": "f72c065f-32d5-4689-b9c0-e695b72c0d1d", 00:18:14.048 "strip_size_kb": 0, 00:18:14.048 "state": "online", 00:18:14.048 "raid_level": "raid1", 00:18:14.048 "superblock": true, 00:18:14.048 "num_base_bdevs": 2, 00:18:14.048 "num_base_bdevs_discovered": 2, 00:18:14.048 "num_base_bdevs_operational": 2, 00:18:14.048 "base_bdevs_list": [ 00:18:14.048 { 00:18:14.048 "name": "pt1", 00:18:14.048 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:14.048 "is_configured": true, 00:18:14.048 "data_offset": 256, 00:18:14.048 "data_size": 7936 00:18:14.048 }, 00:18:14.048 { 00:18:14.048 "name": "pt2", 00:18:14.048 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:14.048 "is_configured": true, 00:18:14.048 "data_offset": 256, 00:18:14.048 "data_size": 7936 00:18:14.048 } 00:18:14.048 ] 00:18:14.048 }' 00:18:14.048 16:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:14.048 16:14:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:14.308 16:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:18:14.308 16:14:40 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:14.308 16:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:14.308 16:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:14.308 16:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:18:14.308 16:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:14.308 16:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:14.308 16:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:14.308 16:14:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.308 16:14:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:14.308 [2024-12-12 16:14:40.589143] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:14.308 16:14:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.308 16:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:14.308 "name": "raid_bdev1", 00:18:14.308 "aliases": [ 00:18:14.308 "f72c065f-32d5-4689-b9c0-e695b72c0d1d" 00:18:14.308 ], 00:18:14.308 "product_name": "Raid Volume", 00:18:14.308 "block_size": 4096, 00:18:14.308 "num_blocks": 7936, 00:18:14.308 "uuid": "f72c065f-32d5-4689-b9c0-e695b72c0d1d", 00:18:14.308 "md_size": 32, 00:18:14.308 "md_interleave": false, 00:18:14.308 "dif_type": 0, 00:18:14.308 "assigned_rate_limits": { 00:18:14.308 "rw_ios_per_sec": 0, 00:18:14.308 "rw_mbytes_per_sec": 0, 00:18:14.308 "r_mbytes_per_sec": 0, 00:18:14.308 "w_mbytes_per_sec": 0 00:18:14.308 }, 00:18:14.308 "claimed": false, 00:18:14.308 "zoned": false, 
00:18:14.308 "supported_io_types": { 00:18:14.308 "read": true, 00:18:14.308 "write": true, 00:18:14.308 "unmap": false, 00:18:14.308 "flush": false, 00:18:14.308 "reset": true, 00:18:14.308 "nvme_admin": false, 00:18:14.308 "nvme_io": false, 00:18:14.308 "nvme_io_md": false, 00:18:14.308 "write_zeroes": true, 00:18:14.308 "zcopy": false, 00:18:14.308 "get_zone_info": false, 00:18:14.308 "zone_management": false, 00:18:14.308 "zone_append": false, 00:18:14.308 "compare": false, 00:18:14.308 "compare_and_write": false, 00:18:14.308 "abort": false, 00:18:14.308 "seek_hole": false, 00:18:14.308 "seek_data": false, 00:18:14.308 "copy": false, 00:18:14.308 "nvme_iov_md": false 00:18:14.308 }, 00:18:14.308 "memory_domains": [ 00:18:14.308 { 00:18:14.308 "dma_device_id": "system", 00:18:14.308 "dma_device_type": 1 00:18:14.308 }, 00:18:14.308 { 00:18:14.308 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:14.308 "dma_device_type": 2 00:18:14.308 }, 00:18:14.308 { 00:18:14.308 "dma_device_id": "system", 00:18:14.308 "dma_device_type": 1 00:18:14.308 }, 00:18:14.308 { 00:18:14.308 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:14.308 "dma_device_type": 2 00:18:14.308 } 00:18:14.308 ], 00:18:14.308 "driver_specific": { 00:18:14.308 "raid": { 00:18:14.308 "uuid": "f72c065f-32d5-4689-b9c0-e695b72c0d1d", 00:18:14.308 "strip_size_kb": 0, 00:18:14.308 "state": "online", 00:18:14.308 "raid_level": "raid1", 00:18:14.308 "superblock": true, 00:18:14.308 "num_base_bdevs": 2, 00:18:14.308 "num_base_bdevs_discovered": 2, 00:18:14.308 "num_base_bdevs_operational": 2, 00:18:14.308 "base_bdevs_list": [ 00:18:14.308 { 00:18:14.308 "name": "pt1", 00:18:14.308 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:14.308 "is_configured": true, 00:18:14.308 "data_offset": 256, 00:18:14.308 "data_size": 7936 00:18:14.308 }, 00:18:14.308 { 00:18:14.308 "name": "pt2", 00:18:14.308 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:14.308 "is_configured": true, 00:18:14.308 "data_offset": 256, 
00:18:14.308 "data_size": 7936 00:18:14.308 } 00:18:14.308 ] 00:18:14.308 } 00:18:14.308 } 00:18:14.308 }' 00:18:14.308 16:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:14.569 16:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:14.569 pt2' 00:18:14.569 16:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:14.569 16:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:18:14.569 16:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:14.569 16:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:14.569 16:14:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.569 16:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:14.569 16:14:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:14.569 16:14:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.569 16:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:14.569 16:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:14.569 16:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:14.569 16:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs 
-b pt2 00:18:14.569 16:14:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.569 16:14:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:14.569 16:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:14.569 16:14:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.569 16:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:14.569 16:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:14.569 16:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:18:14.569 16:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:14.569 16:14:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.569 16:14:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:14.569 [2024-12-12 16:14:40.808706] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:14.569 16:14:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.569 16:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=f72c065f-32d5-4689-b9c0-e695b72c0d1d 00:18:14.569 16:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z f72c065f-32d5-4689-b9c0-e695b72c0d1d ']' 00:18:14.569 16:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:14.569 16:14:40 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.569 16:14:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:14.569 [2024-12-12 16:14:40.848411] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:14.569 [2024-12-12 16:14:40.848476] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:14.569 [2024-12-12 16:14:40.848579] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:14.569 [2024-12-12 16:14:40.848643] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:14.569 [2024-12-12 16:14:40.848731] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:14.569 16:14:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.569 16:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.569 16:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:18:14.569 16:14:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.569 16:14:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:14.569 16:14:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.569 16:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:18:14.569 16:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:18:14.569 16:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:14.569 16:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 
00:18:14.569 16:14:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.569 16:14:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:14.569 16:14:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.569 16:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:14.569 16:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:18:14.569 16:14:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.569 16:14:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:14.836 16:14:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.836 16:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:18:14.836 16:14:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.836 16:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:14.836 16:14:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:14.836 16:14:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.836 16:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:18:14.836 16:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:14.836 16:14:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:18:14.836 16:14:40 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:14.836 16:14:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:14.836 16:14:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:14.836 16:14:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:14.836 16:14:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:14.836 16:14:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:14.836 16:14:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.836 16:14:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:14.836 [2024-12-12 16:14:40.980199] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:14.836 [2024-12-12 16:14:40.981941] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:14.836 [2024-12-12 16:14:40.982007] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:18:14.836 [2024-12-12 16:14:40.982053] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:14.836 [2024-12-12 16:14:40.982068] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:14.836 [2024-12-12 16:14:40.982078] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:18:14.836 request: 00:18:14.836 { 00:18:14.836 "name": 
"raid_bdev1", 00:18:14.836 "raid_level": "raid1", 00:18:14.836 "base_bdevs": [ 00:18:14.836 "malloc1", 00:18:14.836 "malloc2" 00:18:14.836 ], 00:18:14.836 "superblock": false, 00:18:14.836 "method": "bdev_raid_create", 00:18:14.836 "req_id": 1 00:18:14.836 } 00:18:14.836 Got JSON-RPC error response 00:18:14.836 response: 00:18:14.836 { 00:18:14.836 "code": -17, 00:18:14.836 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:14.836 } 00:18:14.836 16:14:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:14.836 16:14:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # es=1 00:18:14.836 16:14:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:14.836 16:14:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:14.836 16:14:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:14.836 16:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:18:14.836 16:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.836 16:14:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.836 16:14:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:14.836 16:14:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.836 16:14:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:18:14.836 16:14:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:18:14.836 16:14:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:18:14.836 16:14:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.836 16:14:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:14.836 [2024-12-12 16:14:41.036086] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:14.836 [2024-12-12 16:14:41.036179] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:14.836 [2024-12-12 16:14:41.036209] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:14.836 [2024-12-12 16:14:41.036235] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:14.836 [2024-12-12 16:14:41.038096] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:14.836 [2024-12-12 16:14:41.038166] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:14.836 [2024-12-12 16:14:41.038224] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:14.836 [2024-12-12 16:14:41.038302] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:14.836 pt1 00:18:14.836 16:14:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.836 16:14:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:18:14.837 16:14:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:14.837 16:14:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:14.837 16:14:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:14.837 16:14:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:18:14.837 16:14:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:14.837 16:14:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:14.837 16:14:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:14.837 16:14:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:14.837 16:14:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:14.837 16:14:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.837 16:14:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:14.837 16:14:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.837 16:14:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:14.837 16:14:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.837 16:14:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:14.837 "name": "raid_bdev1", 00:18:14.837 "uuid": "f72c065f-32d5-4689-b9c0-e695b72c0d1d", 00:18:14.837 "strip_size_kb": 0, 00:18:14.837 "state": "configuring", 00:18:14.837 "raid_level": "raid1", 00:18:14.837 "superblock": true, 00:18:14.837 "num_base_bdevs": 2, 00:18:14.837 "num_base_bdevs_discovered": 1, 00:18:14.837 "num_base_bdevs_operational": 2, 00:18:14.837 "base_bdevs_list": [ 00:18:14.837 { 00:18:14.837 "name": "pt1", 00:18:14.837 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:14.837 "is_configured": true, 00:18:14.837 "data_offset": 256, 00:18:14.837 "data_size": 7936 00:18:14.837 }, 00:18:14.837 { 00:18:14.837 "name": null, 00:18:14.837 
"uuid": "00000000-0000-0000-0000-000000000002", 00:18:14.837 "is_configured": false, 00:18:14.837 "data_offset": 256, 00:18:14.837 "data_size": 7936 00:18:14.837 } 00:18:14.837 ] 00:18:14.837 }' 00:18:14.837 16:14:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:14.837 16:14:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:15.441 16:14:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:18:15.441 16:14:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:18:15.441 16:14:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:15.441 16:14:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:15.441 16:14:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.441 16:14:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:15.441 [2024-12-12 16:14:41.479372] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:15.441 [2024-12-12 16:14:41.479432] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:15.441 [2024-12-12 16:14:41.479449] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:15.441 [2024-12-12 16:14:41.479458] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:15.441 [2024-12-12 16:14:41.479621] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:15.441 [2024-12-12 16:14:41.479661] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:15.441 [2024-12-12 16:14:41.479714] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found 
on bdev pt2 00:18:15.441 [2024-12-12 16:14:41.479733] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:15.441 [2024-12-12 16:14:41.479824] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:15.441 [2024-12-12 16:14:41.479834] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:15.441 [2024-12-12 16:14:41.479901] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:15.441 [2024-12-12 16:14:41.480037] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:15.441 [2024-12-12 16:14:41.480045] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:18:15.441 [2024-12-12 16:14:41.480123] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:15.441 pt2 00:18:15.441 16:14:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.441 16:14:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:15.441 16:14:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:15.441 16:14:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:15.441 16:14:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:15.441 16:14:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:15.441 16:14:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:15.441 16:14:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:15.441 16:14:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:18:15.441 16:14:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:15.441 16:14:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:15.441 16:14:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:15.441 16:14:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:15.441 16:14:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:15.441 16:14:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:15.441 16:14:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.441 16:14:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:15.441 16:14:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.441 16:14:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:15.441 "name": "raid_bdev1", 00:18:15.441 "uuid": "f72c065f-32d5-4689-b9c0-e695b72c0d1d", 00:18:15.441 "strip_size_kb": 0, 00:18:15.441 "state": "online", 00:18:15.441 "raid_level": "raid1", 00:18:15.441 "superblock": true, 00:18:15.441 "num_base_bdevs": 2, 00:18:15.441 "num_base_bdevs_discovered": 2, 00:18:15.441 "num_base_bdevs_operational": 2, 00:18:15.441 "base_bdevs_list": [ 00:18:15.441 { 00:18:15.441 "name": "pt1", 00:18:15.441 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:15.441 "is_configured": true, 00:18:15.441 "data_offset": 256, 00:18:15.441 "data_size": 7936 00:18:15.441 }, 00:18:15.441 { 00:18:15.441 "name": "pt2", 00:18:15.441 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:15.441 "is_configured": true, 00:18:15.441 "data_offset": 256, 
00:18:15.441 "data_size": 7936 00:18:15.441 } 00:18:15.441 ] 00:18:15.441 }' 00:18:15.441 16:14:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:15.441 16:14:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:15.701 16:14:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:18:15.701 16:14:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:15.701 16:14:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:15.701 16:14:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:15.701 16:14:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:18:15.701 16:14:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:15.701 16:14:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:15.701 16:14:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:15.701 16:14:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.701 16:14:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:15.701 [2024-12-12 16:14:41.950780] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:15.701 16:14:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.701 16:14:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:15.701 "name": "raid_bdev1", 00:18:15.701 "aliases": [ 00:18:15.701 "f72c065f-32d5-4689-b9c0-e695b72c0d1d" 00:18:15.701 ], 00:18:15.701 "product_name": 
"Raid Volume", 00:18:15.701 "block_size": 4096, 00:18:15.701 "num_blocks": 7936, 00:18:15.701 "uuid": "f72c065f-32d5-4689-b9c0-e695b72c0d1d", 00:18:15.701 "md_size": 32, 00:18:15.701 "md_interleave": false, 00:18:15.701 "dif_type": 0, 00:18:15.701 "assigned_rate_limits": { 00:18:15.701 "rw_ios_per_sec": 0, 00:18:15.701 "rw_mbytes_per_sec": 0, 00:18:15.701 "r_mbytes_per_sec": 0, 00:18:15.701 "w_mbytes_per_sec": 0 00:18:15.701 }, 00:18:15.701 "claimed": false, 00:18:15.701 "zoned": false, 00:18:15.701 "supported_io_types": { 00:18:15.701 "read": true, 00:18:15.701 "write": true, 00:18:15.701 "unmap": false, 00:18:15.701 "flush": false, 00:18:15.701 "reset": true, 00:18:15.701 "nvme_admin": false, 00:18:15.701 "nvme_io": false, 00:18:15.701 "nvme_io_md": false, 00:18:15.701 "write_zeroes": true, 00:18:15.701 "zcopy": false, 00:18:15.701 "get_zone_info": false, 00:18:15.701 "zone_management": false, 00:18:15.701 "zone_append": false, 00:18:15.701 "compare": false, 00:18:15.701 "compare_and_write": false, 00:18:15.701 "abort": false, 00:18:15.701 "seek_hole": false, 00:18:15.701 "seek_data": false, 00:18:15.701 "copy": false, 00:18:15.701 "nvme_iov_md": false 00:18:15.701 }, 00:18:15.701 "memory_domains": [ 00:18:15.701 { 00:18:15.701 "dma_device_id": "system", 00:18:15.701 "dma_device_type": 1 00:18:15.701 }, 00:18:15.701 { 00:18:15.701 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:15.701 "dma_device_type": 2 00:18:15.701 }, 00:18:15.701 { 00:18:15.701 "dma_device_id": "system", 00:18:15.701 "dma_device_type": 1 00:18:15.702 }, 00:18:15.702 { 00:18:15.702 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:15.702 "dma_device_type": 2 00:18:15.702 } 00:18:15.702 ], 00:18:15.702 "driver_specific": { 00:18:15.702 "raid": { 00:18:15.702 "uuid": "f72c065f-32d5-4689-b9c0-e695b72c0d1d", 00:18:15.702 "strip_size_kb": 0, 00:18:15.702 "state": "online", 00:18:15.702 "raid_level": "raid1", 00:18:15.702 "superblock": true, 00:18:15.702 "num_base_bdevs": 2, 00:18:15.702 
"num_base_bdevs_discovered": 2, 00:18:15.702 "num_base_bdevs_operational": 2, 00:18:15.702 "base_bdevs_list": [ 00:18:15.702 { 00:18:15.702 "name": "pt1", 00:18:15.702 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:15.702 "is_configured": true, 00:18:15.702 "data_offset": 256, 00:18:15.702 "data_size": 7936 00:18:15.702 }, 00:18:15.702 { 00:18:15.702 "name": "pt2", 00:18:15.702 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:15.702 "is_configured": true, 00:18:15.702 "data_offset": 256, 00:18:15.702 "data_size": 7936 00:18:15.702 } 00:18:15.702 ] 00:18:15.702 } 00:18:15.702 } 00:18:15.702 }' 00:18:15.702 16:14:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:15.702 16:14:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:15.702 pt2' 00:18:15.702 16:14:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:15.702 16:14:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:18:15.702 16:14:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:15.961 16:14:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:15.961 16:14:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:15.961 16:14:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.961 16:14:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:15.961 16:14:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.961 
16:14:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:15.961 16:14:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:15.961 16:14:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:15.961 16:14:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:15.962 16:14:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:15.962 16:14:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.962 16:14:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:15.962 16:14:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.962 16:14:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:15.962 16:14:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:15.962 16:14:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:18:15.962 16:14:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:15.962 16:14:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.962 16:14:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:15.962 [2024-12-12 16:14:42.150461] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:15.962 16:14:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:18:15.962 16:14:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' f72c065f-32d5-4689-b9c0-e695b72c0d1d '!=' f72c065f-32d5-4689-b9c0-e695b72c0d1d ']' 00:18:15.962 16:14:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:18:15.962 16:14:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:15.962 16:14:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:18:15.962 16:14:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:18:15.962 16:14:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.962 16:14:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:15.962 [2024-12-12 16:14:42.194180] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:18:15.962 16:14:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.962 16:14:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:15.962 16:14:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:15.962 16:14:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:15.962 16:14:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:15.962 16:14:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:15.962 16:14:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:15.962 16:14:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:15.962 16:14:42 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:15.962 16:14:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:15.962 16:14:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:15.962 16:14:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:15.962 16:14:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.962 16:14:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:15.962 16:14:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:15.962 16:14:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.962 16:14:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:15.962 "name": "raid_bdev1", 00:18:15.962 "uuid": "f72c065f-32d5-4689-b9c0-e695b72c0d1d", 00:18:15.962 "strip_size_kb": 0, 00:18:15.962 "state": "online", 00:18:15.962 "raid_level": "raid1", 00:18:15.962 "superblock": true, 00:18:15.962 "num_base_bdevs": 2, 00:18:15.962 "num_base_bdevs_discovered": 1, 00:18:15.962 "num_base_bdevs_operational": 1, 00:18:15.962 "base_bdevs_list": [ 00:18:15.962 { 00:18:15.962 "name": null, 00:18:15.962 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:15.962 "is_configured": false, 00:18:15.962 "data_offset": 0, 00:18:15.962 "data_size": 7936 00:18:15.962 }, 00:18:15.962 { 00:18:15.962 "name": "pt2", 00:18:15.962 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:15.962 "is_configured": true, 00:18:15.962 "data_offset": 256, 00:18:15.962 "data_size": 7936 00:18:15.962 } 00:18:15.962 ] 00:18:15.962 }' 00:18:15.962 16:14:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:18:15.962 16:14:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:16.532 16:14:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:16.532 16:14:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.532 16:14:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:16.532 [2024-12-12 16:14:42.657336] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:16.532 [2024-12-12 16:14:42.657408] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:16.532 [2024-12-12 16:14:42.657493] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:16.532 [2024-12-12 16:14:42.657545] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:16.532 [2024-12-12 16:14:42.657577] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:18:16.532 16:14:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.532 16:14:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.532 16:14:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.532 16:14:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:16.532 16:14:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:18:16.532 16:14:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.532 16:14:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:18:16.532 16:14:42 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:18:16.532 16:14:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:18:16.532 16:14:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:16.532 16:14:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:18:16.532 16:14:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.532 16:14:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:16.532 16:14:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.532 16:14:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:16.532 16:14:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:16.532 16:14:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:18:16.532 16:14:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:16.532 16:14:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:18:16.532 16:14:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:16.532 16:14:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.532 16:14:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:16.532 [2024-12-12 16:14:42.733211] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:16.532 [2024-12-12 16:14:42.733258] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:16.532 
[2024-12-12 16:14:42.733271] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:18:16.532 [2024-12-12 16:14:42.733281] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:16.532 [2024-12-12 16:14:42.735081] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:16.532 [2024-12-12 16:14:42.735176] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:16.532 [2024-12-12 16:14:42.735222] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:16.532 [2024-12-12 16:14:42.735276] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:16.532 [2024-12-12 16:14:42.735361] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:16.532 [2024-12-12 16:14:42.735373] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:16.532 [2024-12-12 16:14:42.735441] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:16.532 [2024-12-12 16:14:42.735546] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:16.532 [2024-12-12 16:14:42.735553] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:18:16.532 [2024-12-12 16:14:42.735650] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:16.532 pt2 00:18:16.532 16:14:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.532 16:14:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:16.532 16:14:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:16.532 16:14:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- 
# local expected_state=online 00:18:16.532 16:14:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:16.532 16:14:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:16.532 16:14:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:16.532 16:14:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:16.532 16:14:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:16.532 16:14:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:16.532 16:14:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:16.532 16:14:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.532 16:14:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:16.532 16:14:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.532 16:14:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:16.532 16:14:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.532 16:14:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:16.532 "name": "raid_bdev1", 00:18:16.532 "uuid": "f72c065f-32d5-4689-b9c0-e695b72c0d1d", 00:18:16.532 "strip_size_kb": 0, 00:18:16.532 "state": "online", 00:18:16.532 "raid_level": "raid1", 00:18:16.532 "superblock": true, 00:18:16.532 "num_base_bdevs": 2, 00:18:16.532 "num_base_bdevs_discovered": 1, 00:18:16.532 "num_base_bdevs_operational": 1, 00:18:16.532 "base_bdevs_list": [ 00:18:16.532 { 00:18:16.532 
"name": null, 00:18:16.532 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:16.532 "is_configured": false, 00:18:16.532 "data_offset": 256, 00:18:16.532 "data_size": 7936 00:18:16.532 }, 00:18:16.532 { 00:18:16.532 "name": "pt2", 00:18:16.532 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:16.532 "is_configured": true, 00:18:16.532 "data_offset": 256, 00:18:16.532 "data_size": 7936 00:18:16.532 } 00:18:16.532 ] 00:18:16.532 }' 00:18:16.532 16:14:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:16.532 16:14:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:17.102 16:14:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:17.102 16:14:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.102 16:14:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:17.102 [2024-12-12 16:14:43.152468] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:17.102 [2024-12-12 16:14:43.152542] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:17.102 [2024-12-12 16:14:43.152619] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:17.102 [2024-12-12 16:14:43.152673] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:17.102 [2024-12-12 16:14:43.152703] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:18:17.102 16:14:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.102 16:14:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:17.102 16:14:43 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:18:17.102 16:14:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.102 16:14:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:17.102 16:14:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.102 16:14:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:18:17.102 16:14:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:18:17.102 16:14:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:18:17.102 16:14:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:17.102 16:14:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.102 16:14:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:17.102 [2024-12-12 16:14:43.212401] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:17.102 [2024-12-12 16:14:43.212491] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:17.102 [2024-12-12 16:14:43.212526] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:18:17.102 [2024-12-12 16:14:43.212553] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:17.102 [2024-12-12 16:14:43.214381] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:17.102 [2024-12-12 16:14:43.214463] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:17.102 [2024-12-12 16:14:43.214527] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock 
found on bdev pt1 00:18:17.102 [2024-12-12 16:14:43.214578] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:17.102 [2024-12-12 16:14:43.214697] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:17.102 [2024-12-12 16:14:43.214746] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:17.102 [2024-12-12 16:14:43.214781] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:18:17.102 [2024-12-12 16:14:43.214889] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:17.102 [2024-12-12 16:14:43.214995] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:18:17.102 [2024-12-12 16:14:43.215031] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:17.102 [2024-12-12 16:14:43.215100] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:17.102 [2024-12-12 16:14:43.215231] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:18:17.102 [2024-12-12 16:14:43.215268] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:18:17.102 [2024-12-12 16:14:43.215401] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:17.102 pt1 00:18:17.102 16:14:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.102 16:14:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:18:17.102 16:14:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:17.102 16:14:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:18:17.102 16:14:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:17.102 16:14:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:17.102 16:14:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:17.102 16:14:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:17.102 16:14:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:17.102 16:14:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:17.102 16:14:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:17.102 16:14:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:17.102 16:14:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:17.102 16:14:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:17.102 16:14:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.102 16:14:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:17.102 16:14:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.102 16:14:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:17.102 "name": "raid_bdev1", 00:18:17.102 "uuid": "f72c065f-32d5-4689-b9c0-e695b72c0d1d", 00:18:17.102 "strip_size_kb": 0, 00:18:17.102 "state": "online", 00:18:17.102 "raid_level": "raid1", 00:18:17.102 "superblock": true, 00:18:17.102 "num_base_bdevs": 2, 00:18:17.102 "num_base_bdevs_discovered": 1, 00:18:17.102 
"num_base_bdevs_operational": 1, 00:18:17.102 "base_bdevs_list": [ 00:18:17.102 { 00:18:17.102 "name": null, 00:18:17.102 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:17.102 "is_configured": false, 00:18:17.102 "data_offset": 256, 00:18:17.102 "data_size": 7936 00:18:17.102 }, 00:18:17.102 { 00:18:17.102 "name": "pt2", 00:18:17.102 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:17.102 "is_configured": true, 00:18:17.103 "data_offset": 256, 00:18:17.103 "data_size": 7936 00:18:17.103 } 00:18:17.103 ] 00:18:17.103 }' 00:18:17.103 16:14:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:17.103 16:14:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:17.362 16:14:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:18:17.362 16:14:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:17.362 16:14:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.362 16:14:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:17.362 16:14:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.362 16:14:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:18:17.362 16:14:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:17.362 16:14:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.362 16:14:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:17.362 16:14:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:18:17.362 [2024-12-12 
16:14:43.699947] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:17.622 16:14:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.622 16:14:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' f72c065f-32d5-4689-b9c0-e695b72c0d1d '!=' f72c065f-32d5-4689-b9c0-e695b72c0d1d ']' 00:18:17.622 16:14:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 89534 00:18:17.622 16:14:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # '[' -z 89534 ']' 00:18:17.622 16:14:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # kill -0 89534 00:18:17.622 16:14:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # uname 00:18:17.622 16:14:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:17.622 16:14:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89534 00:18:17.622 killing process with pid 89534 00:18:17.622 16:14:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:17.622 16:14:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:17.622 16:14:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89534' 00:18:17.622 16:14:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@973 -- # kill 89534 00:18:17.622 [2024-12-12 16:14:43.774607] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:17.622 [2024-12-12 16:14:43.774665] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:17.622 [2024-12-12 16:14:43.774699] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev 
base bdevs is 0, going to free all in destruct 00:18:17.622 [2024-12-12 16:14:43.774713] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:18:17.622 16:14:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@978 -- # wait 89534 00:18:17.881 [2024-12-12 16:14:43.977450] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:18.821 16:14:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:18:18.821 00:18:18.821 real 0m5.910s 00:18:18.821 user 0m8.950s 00:18:18.821 sys 0m1.122s 00:18:18.821 16:14:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:18.821 16:14:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:18.821 ************************************ 00:18:18.821 END TEST raid_superblock_test_md_separate 00:18:18.821 ************************************ 00:18:18.821 16:14:45 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:18:18.821 16:14:45 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:18:18.821 16:14:45 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:18.821 16:14:45 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:18.821 16:14:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:18.821 ************************************ 00:18:18.821 START TEST raid_rebuild_test_sb_md_separate 00:18:18.821 ************************************ 00:18:18.821 16:14:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:18:18.821 16:14:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:18:18.821 16:14:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:18:18.821 16:14:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:18:18.821 16:14:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:18.821 16:14:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:18.821 16:14:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:18.821 16:14:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:18.821 16:14:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:18.821 16:14:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:18.821 16:14:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:18.821 16:14:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:18.821 16:14:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:18.821 16:14:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:18.821 16:14:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:18.821 16:14:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:18.821 16:14:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:18.821 16:14:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:18.821 16:14:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:18.821 16:14:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:18.821 
16:14:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:18.821 16:14:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:18:18.821 16:14:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:18:18.821 16:14:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:18:18.821 16:14:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:18:18.821 16:14:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=89857 00:18:18.821 16:14:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:18.821 16:14:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 89857 00:18:18.821 16:14:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 89857 ']' 00:18:18.821 16:14:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:18.821 16:14:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:18.821 16:14:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:18.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:18.821 16:14:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:18.821 16:14:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:19.080 [2024-12-12 16:14:45.198036] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:18:19.080 [2024-12-12 16:14:45.198233] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89857 ] 00:18:19.080 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:19.080 Zero copy mechanism will not be used. 00:18:19.080 [2024-12-12 16:14:45.370122] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:19.339 [2024-12-12 16:14:45.484072] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:19.339 [2024-12-12 16:14:45.676053] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:19.340 [2024-12-12 16:14:45.676172] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:19.909 16:14:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:19.909 16:14:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:18:19.909 16:14:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:19.909 16:14:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:18:19.909 16:14:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.909 16:14:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:19.909 BaseBdev1_malloc 
00:18:19.909 16:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.909 16:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:19.909 16:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.909 16:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:19.909 [2024-12-12 16:14:46.046693] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:19.909 [2024-12-12 16:14:46.046763] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:19.909 [2024-12-12 16:14:46.046784] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:19.909 [2024-12-12 16:14:46.046795] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:19.909 [2024-12-12 16:14:46.048666] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:19.909 [2024-12-12 16:14:46.048743] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:19.909 BaseBdev1 00:18:19.909 16:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.909 16:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:19.909 16:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:18:19.909 16:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.909 16:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:19.909 BaseBdev2_malloc 00:18:19.909 16:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.909 16:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:19.909 16:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.909 16:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:19.909 [2024-12-12 16:14:46.102250] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:19.909 [2024-12-12 16:14:46.102364] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:19.909 [2024-12-12 16:14:46.102399] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:19.909 [2024-12-12 16:14:46.102427] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:19.909 [2024-12-12 16:14:46.104232] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:19.909 [2024-12-12 16:14:46.104321] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:19.909 BaseBdev2 00:18:19.909 16:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.909 16:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:18:19.909 16:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.909 16:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:19.909 spare_malloc 00:18:19.909 16:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.909 16:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 
100000 -n 100000 00:18:19.909 16:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.909 16:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:19.909 spare_delay 00:18:19.909 16:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.909 16:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:19.909 16:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.909 16:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:19.909 [2024-12-12 16:14:46.192989] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:19.909 [2024-12-12 16:14:46.193047] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:19.909 [2024-12-12 16:14:46.193066] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:19.909 [2024-12-12 16:14:46.193076] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:19.909 [2024-12-12 16:14:46.194831] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:19.909 [2024-12-12 16:14:46.194872] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:19.909 spare 00:18:19.909 16:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.909 16:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:18:19.910 16:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.910 16:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:18:19.910 [2024-12-12 16:14:46.205028] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:19.910 [2024-12-12 16:14:46.206685] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:19.910 [2024-12-12 16:14:46.206865] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:19.910 [2024-12-12 16:14:46.206881] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:19.910 [2024-12-12 16:14:46.206963] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:19.910 [2024-12-12 16:14:46.207089] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:19.910 [2024-12-12 16:14:46.207108] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:19.910 [2024-12-12 16:14:46.207197] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:19.910 16:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.910 16:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:19.910 16:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:19.910 16:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:19.910 16:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:19.910 16:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:19.910 16:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:19.910 16:14:46 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:19.910 16:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:19.910 16:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:19.910 16:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:19.910 16:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:19.910 16:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:19.910 16:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.910 16:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:19.910 16:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.168 16:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:20.168 "name": "raid_bdev1", 00:18:20.168 "uuid": "677a9cdf-0435-49c8-8a92-6d5798cbd58a", 00:18:20.168 "strip_size_kb": 0, 00:18:20.168 "state": "online", 00:18:20.168 "raid_level": "raid1", 00:18:20.168 "superblock": true, 00:18:20.168 "num_base_bdevs": 2, 00:18:20.168 "num_base_bdevs_discovered": 2, 00:18:20.168 "num_base_bdevs_operational": 2, 00:18:20.168 "base_bdevs_list": [ 00:18:20.168 { 00:18:20.168 "name": "BaseBdev1", 00:18:20.168 "uuid": "45993574-1779-5f5d-8cb8-302feef8a370", 00:18:20.168 "is_configured": true, 00:18:20.168 "data_offset": 256, 00:18:20.168 "data_size": 7936 00:18:20.168 }, 00:18:20.168 { 00:18:20.168 "name": "BaseBdev2", 00:18:20.168 "uuid": "ffc477cc-f2ca-5c15-beea-7b0a3aa74541", 00:18:20.169 "is_configured": true, 00:18:20.169 "data_offset": 256, 00:18:20.169 "data_size": 7936 
00:18:20.169 } 00:18:20.169 ] 00:18:20.169 }' 00:18:20.169 16:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:20.169 16:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:20.427 16:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:20.427 16:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:20.427 16:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.427 16:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:20.427 [2024-12-12 16:14:46.692378] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:20.427 16:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.427 16:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:18:20.427 16:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.427 16:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.427 16:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:20.427 16:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:20.427 16:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.687 16:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:18:20.687 16:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:20.687 16:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:18:20.687 16:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:18:20.687 16:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:18:20.687 16:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:20.687 16:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:18:20.687 16:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:20.687 16:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:20.687 16:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:20.687 16:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:18:20.687 16:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:20.687 16:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:20.687 16:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:18:20.687 [2024-12-12 16:14:46.951849] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:20.687 /dev/nbd0 00:18:20.687 16:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:20.687 16:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:20.687 16:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:20.687 16:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@873 -- # local i 00:18:20.687 16:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:20.687 16:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:20.687 16:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:20.687 16:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:18:20.687 16:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:20.687 16:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:20.687 16:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:20.687 1+0 records in 00:18:20.687 1+0 records out 00:18:20.687 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00048013 s, 8.5 MB/s 00:18:20.687 16:14:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:20.687 16:14:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:18:20.687 16:14:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:20.687 16:14:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:20.687 16:14:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:18:20.687 16:14:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:20.687 16:14:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:20.687 16:14:47 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:18:20.687 16:14:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:18:20.687 16:14:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:18:21.626 7936+0 records in 00:18:21.626 7936+0 records out 00:18:21.626 32505856 bytes (33 MB, 31 MiB) copied, 0.633206 s, 51.3 MB/s 00:18:21.626 16:14:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:21.626 16:14:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:21.626 16:14:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:21.626 16:14:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:21.626 16:14:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:18:21.626 16:14:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:21.626 16:14:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:21.626 [2024-12-12 16:14:47.874949] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:21.626 16:14:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:21.626 16:14:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:21.626 16:14:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:21.626 16:14:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:21.626 16:14:47 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:21.626 16:14:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:21.626 16:14:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:18:21.626 16:14:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:18:21.626 16:14:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:21.626 16:14:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.626 16:14:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:21.626 [2024-12-12 16:14:47.907956] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:21.626 16:14:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.626 16:14:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:21.626 16:14:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:21.626 16:14:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:21.626 16:14:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:21.626 16:14:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:21.626 16:14:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:21.626 16:14:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:21.626 16:14:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:18:21.626 16:14:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:21.626 16:14:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:21.626 16:14:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.626 16:14:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:21.626 16:14:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.626 16:14:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:21.626 16:14:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.626 16:14:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:21.626 "name": "raid_bdev1", 00:18:21.626 "uuid": "677a9cdf-0435-49c8-8a92-6d5798cbd58a", 00:18:21.626 "strip_size_kb": 0, 00:18:21.626 "state": "online", 00:18:21.626 "raid_level": "raid1", 00:18:21.626 "superblock": true, 00:18:21.627 "num_base_bdevs": 2, 00:18:21.627 "num_base_bdevs_discovered": 1, 00:18:21.627 "num_base_bdevs_operational": 1, 00:18:21.627 "base_bdevs_list": [ 00:18:21.627 { 00:18:21.627 "name": null, 00:18:21.627 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:21.627 "is_configured": false, 00:18:21.627 "data_offset": 0, 00:18:21.627 "data_size": 7936 00:18:21.627 }, 00:18:21.627 { 00:18:21.627 "name": "BaseBdev2", 00:18:21.627 "uuid": "ffc477cc-f2ca-5c15-beea-7b0a3aa74541", 00:18:21.627 "is_configured": true, 00:18:21.627 "data_offset": 256, 00:18:21.627 "data_size": 7936 00:18:21.627 } 00:18:21.627 ] 00:18:21.627 }' 00:18:21.627 16:14:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:21.627 16:14:47 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:18:22.196 16:14:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:22.196 16:14:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.196 16:14:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:22.196 [2024-12-12 16:14:48.343239] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:22.196 [2024-12-12 16:14:48.358182] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:18:22.196 16:14:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.196 16:14:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:22.196 [2024-12-12 16:14:48.359991] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:23.134 16:14:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:23.134 16:14:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:23.135 16:14:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:23.135 16:14:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:23.135 16:14:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:23.135 16:14:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:23.135 16:14:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.135 16:14:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 
00:18:23.135 16:14:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:23.135 16:14:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.135 16:14:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:23.135 "name": "raid_bdev1", 00:18:23.135 "uuid": "677a9cdf-0435-49c8-8a92-6d5798cbd58a", 00:18:23.135 "strip_size_kb": 0, 00:18:23.135 "state": "online", 00:18:23.135 "raid_level": "raid1", 00:18:23.135 "superblock": true, 00:18:23.135 "num_base_bdevs": 2, 00:18:23.135 "num_base_bdevs_discovered": 2, 00:18:23.135 "num_base_bdevs_operational": 2, 00:18:23.135 "process": { 00:18:23.135 "type": "rebuild", 00:18:23.135 "target": "spare", 00:18:23.135 "progress": { 00:18:23.135 "blocks": 2560, 00:18:23.135 "percent": 32 00:18:23.135 } 00:18:23.135 }, 00:18:23.135 "base_bdevs_list": [ 00:18:23.135 { 00:18:23.135 "name": "spare", 00:18:23.135 "uuid": "36d9b6e4-eb3a-55b1-aa08-030812bd62f6", 00:18:23.135 "is_configured": true, 00:18:23.135 "data_offset": 256, 00:18:23.135 "data_size": 7936 00:18:23.135 }, 00:18:23.135 { 00:18:23.135 "name": "BaseBdev2", 00:18:23.135 "uuid": "ffc477cc-f2ca-5c15-beea-7b0a3aa74541", 00:18:23.135 "is_configured": true, 00:18:23.135 "data_offset": 256, 00:18:23.135 "data_size": 7936 00:18:23.135 } 00:18:23.135 ] 00:18:23.135 }' 00:18:23.135 16:14:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:23.135 16:14:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:23.135 16:14:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:23.394 16:14:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:23.394 16:14:49 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:23.394 16:14:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.394 16:14:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:23.394 [2024-12-12 16:14:49.504530] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:23.394 [2024-12-12 16:14:49.564984] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:23.394 [2024-12-12 16:14:49.565053] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:23.394 [2024-12-12 16:14:49.565067] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:23.394 [2024-12-12 16:14:49.565078] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:23.394 16:14:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.394 16:14:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:23.394 16:14:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:23.394 16:14:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:23.394 16:14:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:23.394 16:14:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:23.394 16:14:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:23.394 16:14:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:23.394 16:14:49 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:23.395 16:14:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:23.395 16:14:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:23.395 16:14:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:23.395 16:14:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:23.395 16:14:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.395 16:14:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:23.395 16:14:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.395 16:14:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:23.395 "name": "raid_bdev1", 00:18:23.395 "uuid": "677a9cdf-0435-49c8-8a92-6d5798cbd58a", 00:18:23.395 "strip_size_kb": 0, 00:18:23.395 "state": "online", 00:18:23.395 "raid_level": "raid1", 00:18:23.395 "superblock": true, 00:18:23.395 "num_base_bdevs": 2, 00:18:23.395 "num_base_bdevs_discovered": 1, 00:18:23.395 "num_base_bdevs_operational": 1, 00:18:23.395 "base_bdevs_list": [ 00:18:23.395 { 00:18:23.395 "name": null, 00:18:23.395 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:23.395 "is_configured": false, 00:18:23.395 "data_offset": 0, 00:18:23.395 "data_size": 7936 00:18:23.395 }, 00:18:23.395 { 00:18:23.395 "name": "BaseBdev2", 00:18:23.395 "uuid": "ffc477cc-f2ca-5c15-beea-7b0a3aa74541", 00:18:23.395 "is_configured": true, 00:18:23.395 "data_offset": 256, 00:18:23.395 "data_size": 7936 00:18:23.395 } 00:18:23.395 ] 00:18:23.395 }' 00:18:23.395 16:14:49 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:23.395 16:14:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:23.964 16:14:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:23.964 16:14:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:23.964 16:14:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:23.964 16:14:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:23.964 16:14:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:23.964 16:14:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:23.964 16:14:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:23.964 16:14:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.964 16:14:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:23.964 16:14:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.964 16:14:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:23.964 "name": "raid_bdev1", 00:18:23.964 "uuid": "677a9cdf-0435-49c8-8a92-6d5798cbd58a", 00:18:23.964 "strip_size_kb": 0, 00:18:23.964 "state": "online", 00:18:23.964 "raid_level": "raid1", 00:18:23.964 "superblock": true, 00:18:23.964 "num_base_bdevs": 2, 00:18:23.964 "num_base_bdevs_discovered": 1, 00:18:23.964 "num_base_bdevs_operational": 1, 00:18:23.964 "base_bdevs_list": [ 00:18:23.964 { 00:18:23.964 "name": null, 00:18:23.964 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:23.964 
"is_configured": false, 00:18:23.964 "data_offset": 0, 00:18:23.964 "data_size": 7936 00:18:23.964 }, 00:18:23.964 { 00:18:23.964 "name": "BaseBdev2", 00:18:23.964 "uuid": "ffc477cc-f2ca-5c15-beea-7b0a3aa74541", 00:18:23.964 "is_configured": true, 00:18:23.964 "data_offset": 256, 00:18:23.964 "data_size": 7936 00:18:23.964 } 00:18:23.964 ] 00:18:23.964 }' 00:18:23.964 16:14:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:23.964 16:14:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:23.964 16:14:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:23.964 16:14:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:23.964 16:14:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:23.964 16:14:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.964 16:14:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:23.964 [2024-12-12 16:14:50.207780] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:23.964 [2024-12-12 16:14:50.221349] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:18:23.964 16:14:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.964 16:14:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:23.964 [2024-12-12 16:14:50.223171] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:24.902 16:14:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:24.902 16:14:51 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:24.902 16:14:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:24.902 16:14:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:24.902 16:14:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:24.902 16:14:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:24.902 16:14:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.902 16:14:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:24.902 16:14:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:24.902 16:14:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.162 16:14:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:25.162 "name": "raid_bdev1", 00:18:25.162 "uuid": "677a9cdf-0435-49c8-8a92-6d5798cbd58a", 00:18:25.162 "strip_size_kb": 0, 00:18:25.162 "state": "online", 00:18:25.162 "raid_level": "raid1", 00:18:25.162 "superblock": true, 00:18:25.162 "num_base_bdevs": 2, 00:18:25.162 "num_base_bdevs_discovered": 2, 00:18:25.162 "num_base_bdevs_operational": 2, 00:18:25.162 "process": { 00:18:25.162 "type": "rebuild", 00:18:25.162 "target": "spare", 00:18:25.162 "progress": { 00:18:25.162 "blocks": 2560, 00:18:25.162 "percent": 32 00:18:25.162 } 00:18:25.162 }, 00:18:25.162 "base_bdevs_list": [ 00:18:25.162 { 00:18:25.162 "name": "spare", 00:18:25.162 "uuid": "36d9b6e4-eb3a-55b1-aa08-030812bd62f6", 00:18:25.162 "is_configured": true, 00:18:25.162 "data_offset": 256, 00:18:25.162 "data_size": 7936 00:18:25.162 }, 
00:18:25.162 { 00:18:25.162 "name": "BaseBdev2", 00:18:25.162 "uuid": "ffc477cc-f2ca-5c15-beea-7b0a3aa74541", 00:18:25.162 "is_configured": true, 00:18:25.162 "data_offset": 256, 00:18:25.162 "data_size": 7936 00:18:25.162 } 00:18:25.162 ] 00:18:25.162 }' 00:18:25.162 16:14:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:25.162 16:14:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:25.162 16:14:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:25.162 16:14:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:25.162 16:14:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:18:25.162 16:14:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:18:25.162 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:18:25.162 16:14:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:18:25.162 16:14:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:18:25.162 16:14:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:18:25.162 16:14:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=719 00:18:25.162 16:14:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:25.162 16:14:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:25.162 16:14:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:25.162 16:14:51 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:25.162 16:14:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:25.162 16:14:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:25.162 16:14:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:25.162 16:14:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.162 16:14:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:25.162 16:14:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:25.162 16:14:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.162 16:14:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:25.162 "name": "raid_bdev1", 00:18:25.162 "uuid": "677a9cdf-0435-49c8-8a92-6d5798cbd58a", 00:18:25.162 "strip_size_kb": 0, 00:18:25.162 "state": "online", 00:18:25.162 "raid_level": "raid1", 00:18:25.162 "superblock": true, 00:18:25.162 "num_base_bdevs": 2, 00:18:25.162 "num_base_bdevs_discovered": 2, 00:18:25.162 "num_base_bdevs_operational": 2, 00:18:25.162 "process": { 00:18:25.162 "type": "rebuild", 00:18:25.162 "target": "spare", 00:18:25.162 "progress": { 00:18:25.162 "blocks": 2816, 00:18:25.162 "percent": 35 00:18:25.162 } 00:18:25.162 }, 00:18:25.162 "base_bdevs_list": [ 00:18:25.162 { 00:18:25.162 "name": "spare", 00:18:25.162 "uuid": "36d9b6e4-eb3a-55b1-aa08-030812bd62f6", 00:18:25.162 "is_configured": true, 00:18:25.162 "data_offset": 256, 00:18:25.162 "data_size": 7936 00:18:25.162 }, 00:18:25.162 { 00:18:25.162 "name": "BaseBdev2", 00:18:25.162 "uuid": "ffc477cc-f2ca-5c15-beea-7b0a3aa74541", 00:18:25.162 
"is_configured": true, 00:18:25.162 "data_offset": 256, 00:18:25.162 "data_size": 7936 00:18:25.162 } 00:18:25.162 ] 00:18:25.162 }' 00:18:25.162 16:14:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:25.162 16:14:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:25.162 16:14:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:25.163 16:14:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:25.163 16:14:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:26.542 16:14:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:26.542 16:14:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:26.542 16:14:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:26.542 16:14:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:26.542 16:14:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:26.542 16:14:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:26.542 16:14:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:26.542 16:14:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.542 16:14:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:26.542 16:14:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:26.542 16:14:52 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.542 16:14:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:26.542 "name": "raid_bdev1", 00:18:26.542 "uuid": "677a9cdf-0435-49c8-8a92-6d5798cbd58a", 00:18:26.542 "strip_size_kb": 0, 00:18:26.542 "state": "online", 00:18:26.542 "raid_level": "raid1", 00:18:26.542 "superblock": true, 00:18:26.542 "num_base_bdevs": 2, 00:18:26.542 "num_base_bdevs_discovered": 2, 00:18:26.542 "num_base_bdevs_operational": 2, 00:18:26.542 "process": { 00:18:26.543 "type": "rebuild", 00:18:26.543 "target": "spare", 00:18:26.543 "progress": { 00:18:26.543 "blocks": 5632, 00:18:26.543 "percent": 70 00:18:26.543 } 00:18:26.543 }, 00:18:26.543 "base_bdevs_list": [ 00:18:26.543 { 00:18:26.543 "name": "spare", 00:18:26.543 "uuid": "36d9b6e4-eb3a-55b1-aa08-030812bd62f6", 00:18:26.543 "is_configured": true, 00:18:26.543 "data_offset": 256, 00:18:26.543 "data_size": 7936 00:18:26.543 }, 00:18:26.543 { 00:18:26.543 "name": "BaseBdev2", 00:18:26.543 "uuid": "ffc477cc-f2ca-5c15-beea-7b0a3aa74541", 00:18:26.543 "is_configured": true, 00:18:26.543 "data_offset": 256, 00:18:26.543 "data_size": 7936 00:18:26.543 } 00:18:26.543 ] 00:18:26.543 }' 00:18:26.543 16:14:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:26.543 16:14:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:26.543 16:14:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:26.543 16:14:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:26.543 16:14:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:27.110 [2024-12-12 16:14:53.335585] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process 
completed on raid_bdev1 00:18:27.110 [2024-12-12 16:14:53.335753] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:27.110 [2024-12-12 16:14:53.335869] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:27.369 16:14:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:27.369 16:14:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:27.369 16:14:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:27.369 16:14:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:27.369 16:14:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:27.369 16:14:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:27.369 16:14:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:27.369 16:14:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.369 16:14:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:27.369 16:14:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:27.369 16:14:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.369 16:14:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:27.369 "name": "raid_bdev1", 00:18:27.369 "uuid": "677a9cdf-0435-49c8-8a92-6d5798cbd58a", 00:18:27.369 "strip_size_kb": 0, 00:18:27.369 "state": "online", 00:18:27.369 "raid_level": "raid1", 00:18:27.369 "superblock": true, 00:18:27.369 
"num_base_bdevs": 2, 00:18:27.369 "num_base_bdevs_discovered": 2, 00:18:27.369 "num_base_bdevs_operational": 2, 00:18:27.369 "base_bdevs_list": [ 00:18:27.369 { 00:18:27.369 "name": "spare", 00:18:27.369 "uuid": "36d9b6e4-eb3a-55b1-aa08-030812bd62f6", 00:18:27.369 "is_configured": true, 00:18:27.369 "data_offset": 256, 00:18:27.369 "data_size": 7936 00:18:27.369 }, 00:18:27.369 { 00:18:27.369 "name": "BaseBdev2", 00:18:27.369 "uuid": "ffc477cc-f2ca-5c15-beea-7b0a3aa74541", 00:18:27.369 "is_configured": true, 00:18:27.369 "data_offset": 256, 00:18:27.369 "data_size": 7936 00:18:27.369 } 00:18:27.369 ] 00:18:27.369 }' 00:18:27.369 16:14:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:27.631 16:14:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:27.631 16:14:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:27.631 16:14:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:27.632 16:14:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:18:27.632 16:14:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:27.632 16:14:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:27.632 16:14:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:27.632 16:14:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:27.632 16:14:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:27.632 16:14:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:27.632 16:14:53 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:27.632 16:14:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.632 16:14:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:27.632 16:14:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.632 16:14:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:27.632 "name": "raid_bdev1", 00:18:27.632 "uuid": "677a9cdf-0435-49c8-8a92-6d5798cbd58a", 00:18:27.632 "strip_size_kb": 0, 00:18:27.632 "state": "online", 00:18:27.632 "raid_level": "raid1", 00:18:27.632 "superblock": true, 00:18:27.632 "num_base_bdevs": 2, 00:18:27.632 "num_base_bdevs_discovered": 2, 00:18:27.632 "num_base_bdevs_operational": 2, 00:18:27.632 "base_bdevs_list": [ 00:18:27.632 { 00:18:27.632 "name": "spare", 00:18:27.632 "uuid": "36d9b6e4-eb3a-55b1-aa08-030812bd62f6", 00:18:27.632 "is_configured": true, 00:18:27.632 "data_offset": 256, 00:18:27.632 "data_size": 7936 00:18:27.632 }, 00:18:27.632 { 00:18:27.632 "name": "BaseBdev2", 00:18:27.632 "uuid": "ffc477cc-f2ca-5c15-beea-7b0a3aa74541", 00:18:27.632 "is_configured": true, 00:18:27.632 "data_offset": 256, 00:18:27.632 "data_size": 7936 00:18:27.632 } 00:18:27.632 ] 00:18:27.632 }' 00:18:27.632 16:14:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:27.632 16:14:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:27.632 16:14:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:27.632 16:14:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:27.632 16:14:53 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:27.632 16:14:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:27.632 16:14:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:27.632 16:14:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:27.632 16:14:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:27.632 16:14:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:27.632 16:14:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:27.632 16:14:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:27.632 16:14:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:27.632 16:14:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:27.632 16:14:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:27.632 16:14:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:27.632 16:14:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.632 16:14:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:27.632 16:14:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.632 16:14:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:27.632 "name": "raid_bdev1", 00:18:27.632 "uuid": "677a9cdf-0435-49c8-8a92-6d5798cbd58a", 00:18:27.632 
"strip_size_kb": 0, 00:18:27.632 "state": "online", 00:18:27.632 "raid_level": "raid1", 00:18:27.632 "superblock": true, 00:18:27.632 "num_base_bdevs": 2, 00:18:27.632 "num_base_bdevs_discovered": 2, 00:18:27.632 "num_base_bdevs_operational": 2, 00:18:27.632 "base_bdevs_list": [ 00:18:27.632 { 00:18:27.632 "name": "spare", 00:18:27.632 "uuid": "36d9b6e4-eb3a-55b1-aa08-030812bd62f6", 00:18:27.632 "is_configured": true, 00:18:27.632 "data_offset": 256, 00:18:27.632 "data_size": 7936 00:18:27.632 }, 00:18:27.632 { 00:18:27.632 "name": "BaseBdev2", 00:18:27.632 "uuid": "ffc477cc-f2ca-5c15-beea-7b0a3aa74541", 00:18:27.632 "is_configured": true, 00:18:27.632 "data_offset": 256, 00:18:27.632 "data_size": 7936 00:18:27.632 } 00:18:27.632 ] 00:18:27.632 }' 00:18:27.632 16:14:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:27.632 16:14:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:28.201 16:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:28.201 16:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.201 16:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:28.201 [2024-12-12 16:14:54.400967] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:28.201 [2024-12-12 16:14:54.401057] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:28.201 [2024-12-12 16:14:54.401168] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:28.201 [2024-12-12 16:14:54.401275] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:28.201 [2024-12-12 16:14:54.401372] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, 
state offline 00:18:28.201 16:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.201 16:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:28.201 16:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:18:28.201 16:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.201 16:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:28.201 16:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.201 16:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:28.201 16:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:28.201 16:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:18:28.201 16:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:28.201 16:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:28.201 16:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:18:28.201 16:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:28.201 16:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:28.201 16:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:28.201 16:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:18:28.201 16:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:28.201 16:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:28.201 16:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:18:28.460 /dev/nbd0 00:18:28.460 16:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:28.460 16:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:28.460 16:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:28.460 16:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:18:28.460 16:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:28.460 16:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:28.460 16:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:28.460 16:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:18:28.460 16:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:28.460 16:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:28.460 16:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:28.460 1+0 records in 00:18:28.460 1+0 records out 00:18:28.460 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000513327 s, 8.0 MB/s 00:18:28.460 16:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:28.460 16:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:18:28.460 16:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:28.460 16:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:28.460 16:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:18:28.460 16:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:28.460 16:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:28.460 16:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:18:28.720 /dev/nbd1 00:18:28.720 16:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:28.720 16:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:28.720 16:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:28.720 16:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:18:28.720 16:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:28.720 16:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:28.720 16:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:28.720 16:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:18:28.720 16:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:28.720 16:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:28.720 16:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:28.720 1+0 records in 00:18:28.720 1+0 records out 00:18:28.720 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000268181 s, 15.3 MB/s 00:18:28.720 16:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:28.720 16:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:18:28.720 16:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:28.720 16:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:28.720 16:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:18:28.720 16:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:28.720 16:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:28.720 16:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:18:28.980 16:14:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:18:28.980 16:14:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:28.980 16:14:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:28.980 16:14:55 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:18:28.980 16:14:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:18:28.980 16:14:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:28.980 16:14:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:28.980 16:14:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:28.980 16:14:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:28.980 16:14:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:28.980 16:14:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:28.980 16:14:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:28.980 16:14:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:28.980 16:14:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:18:28.980 16:14:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:18:28.980 16:14:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:28.980 16:14:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:29.239 16:14:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:29.239 16:14:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:29.239 16:14:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 
00:18:29.239 16:14:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:29.239 16:14:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:29.239 16:14:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:29.239 16:14:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:18:29.239 16:14:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:18:29.239 16:14:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:18:29.239 16:14:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:18:29.239 16:14:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.239 16:14:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:29.239 16:14:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.239 16:14:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:29.239 16:14:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.239 16:14:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:29.239 [2024-12-12 16:14:55.561021] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:29.239 [2024-12-12 16:14:55.561075] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:29.239 [2024-12-12 16:14:55.561097] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:18:29.239 [2024-12-12 16:14:55.561106] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 
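The `waitfornbd`/`waitfornbd_exit` helpers traced above both poll `/proc/partitions` with `grep -q -w` in a bounded retry loop until the nbd device name appears (or disappears). The following is a hedged, self-contained sketch of that polling pattern, not SPDK's actual helper: the partitions file is passed as a parameter so the sketch does not touch the real `/proc/partitions`, and the sample file contents are made up for illustration.

```shell
#!/usr/bin/env bash
# Sketch of a waitfornbd-style poll (illustrative, not SPDK code):
# retry up to 20 times until the device name shows up as a whole word.
waitfornbd_sketch() {
	local nbd_name=$1 partitions=$2 i
	for ((i = 1; i <= 20; i++)); do
		# -w matches the device name as a whole word, so "nbd0"
		# does not match "nbd01" -- same flag as in the trace above
		grep -q -w "$nbd_name" "$partitions" && return 0
		sleep 0.1
	done
	return 1
}

# Fabricated example contents mimicking the /proc/partitions layout
printf '   8        0    1000 sda\n  43        0     500 nbd0\n' > /tmp/partitions_example
waitfornbd_sketch nbd0 /tmp/partitions_example && echo "nbd0 present"
```

The bounded loop mirrors the `(( i <= 20 ))` guard visible in the trace, so a device that never registers fails the wait instead of hanging the test.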
00:18:29.239 [2024-12-12 16:14:55.563045] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:29.239 [2024-12-12 16:14:55.563091] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:29.239 [2024-12-12 16:14:55.563156] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:29.239 [2024-12-12 16:14:55.563203] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:29.239 [2024-12-12 16:14:55.563351] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:29.239 spare 00:18:29.239 16:14:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.239 16:14:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:29.239 16:14:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.239 16:14:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:29.499 [2024-12-12 16:14:55.663242] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:29.499 [2024-12-12 16:14:55.663272] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:29.499 [2024-12-12 16:14:55.663357] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:18:29.499 [2024-12-12 16:14:55.663484] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:29.499 [2024-12-12 16:14:55.663492] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:18:29.499 [2024-12-12 16:14:55.663596] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:29.499 16:14:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:18:29.499 16:14:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:29.499 16:14:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:29.499 16:14:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:29.499 16:14:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:29.499 16:14:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:29.499 16:14:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:29.499 16:14:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:29.499 16:14:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:29.499 16:14:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:29.499 16:14:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:29.499 16:14:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:29.499 16:14:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:29.499 16:14:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.499 16:14:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:29.499 16:14:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.499 16:14:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:29.499 "name": "raid_bdev1", 00:18:29.499 "uuid": 
"677a9cdf-0435-49c8-8a92-6d5798cbd58a", 00:18:29.499 "strip_size_kb": 0, 00:18:29.499 "state": "online", 00:18:29.499 "raid_level": "raid1", 00:18:29.499 "superblock": true, 00:18:29.499 "num_base_bdevs": 2, 00:18:29.499 "num_base_bdevs_discovered": 2, 00:18:29.499 "num_base_bdevs_operational": 2, 00:18:29.499 "base_bdevs_list": [ 00:18:29.499 { 00:18:29.499 "name": "spare", 00:18:29.499 "uuid": "36d9b6e4-eb3a-55b1-aa08-030812bd62f6", 00:18:29.499 "is_configured": true, 00:18:29.499 "data_offset": 256, 00:18:29.499 "data_size": 7936 00:18:29.499 }, 00:18:29.499 { 00:18:29.499 "name": "BaseBdev2", 00:18:29.499 "uuid": "ffc477cc-f2ca-5c15-beea-7b0a3aa74541", 00:18:29.499 "is_configured": true, 00:18:29.499 "data_offset": 256, 00:18:29.499 "data_size": 7936 00:18:29.499 } 00:18:29.499 ] 00:18:29.499 }' 00:18:29.499 16:14:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:29.499 16:14:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:30.068 16:14:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:30.068 16:14:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:30.068 16:14:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:30.068 16:14:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:30.068 16:14:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:30.068 16:14:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:30.068 16:14:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.068 16:14:56 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.068 16:14:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:30.068 16:14:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.068 16:14:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:30.068 "name": "raid_bdev1", 00:18:30.068 "uuid": "677a9cdf-0435-49c8-8a92-6d5798cbd58a", 00:18:30.068 "strip_size_kb": 0, 00:18:30.068 "state": "online", 00:18:30.068 "raid_level": "raid1", 00:18:30.068 "superblock": true, 00:18:30.068 "num_base_bdevs": 2, 00:18:30.068 "num_base_bdevs_discovered": 2, 00:18:30.068 "num_base_bdevs_operational": 2, 00:18:30.068 "base_bdevs_list": [ 00:18:30.068 { 00:18:30.068 "name": "spare", 00:18:30.068 "uuid": "36d9b6e4-eb3a-55b1-aa08-030812bd62f6", 00:18:30.068 "is_configured": true, 00:18:30.068 "data_offset": 256, 00:18:30.068 "data_size": 7936 00:18:30.068 }, 00:18:30.068 { 00:18:30.068 "name": "BaseBdev2", 00:18:30.068 "uuid": "ffc477cc-f2ca-5c15-beea-7b0a3aa74541", 00:18:30.068 "is_configured": true, 00:18:30.068 "data_offset": 256, 00:18:30.068 "data_size": 7936 00:18:30.068 } 00:18:30.068 ] 00:18:30.068 }' 00:18:30.068 16:14:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:30.068 16:14:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:30.068 16:14:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:30.068 16:14:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:30.068 16:14:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.068 16:14:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r 
'.[].base_bdevs_list[0].name' 00:18:30.068 16:14:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.068 16:14:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:30.068 16:14:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.068 16:14:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:18:30.068 16:14:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:30.068 16:14:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.068 16:14:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:30.068 [2024-12-12 16:14:56.327750] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:30.068 16:14:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.068 16:14:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:30.068 16:14:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:30.068 16:14:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:30.068 16:14:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:30.068 16:14:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:30.068 16:14:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:30.068 16:14:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:30.068 16:14:56 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:30.068 16:14:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:30.068 16:14:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:30.068 16:14:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:30.068 16:14:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.068 16:14:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.068 16:14:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:30.068 16:14:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.068 16:14:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:30.068 "name": "raid_bdev1", 00:18:30.068 "uuid": "677a9cdf-0435-49c8-8a92-6d5798cbd58a", 00:18:30.068 "strip_size_kb": 0, 00:18:30.068 "state": "online", 00:18:30.068 "raid_level": "raid1", 00:18:30.068 "superblock": true, 00:18:30.068 "num_base_bdevs": 2, 00:18:30.068 "num_base_bdevs_discovered": 1, 00:18:30.068 "num_base_bdevs_operational": 1, 00:18:30.068 "base_bdevs_list": [ 00:18:30.068 { 00:18:30.068 "name": null, 00:18:30.068 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:30.068 "is_configured": false, 00:18:30.068 "data_offset": 0, 00:18:30.068 "data_size": 7936 00:18:30.068 }, 00:18:30.068 { 00:18:30.068 "name": "BaseBdev2", 00:18:30.068 "uuid": "ffc477cc-f2ca-5c15-beea-7b0a3aa74541", 00:18:30.068 "is_configured": true, 00:18:30.068 "data_offset": 256, 00:18:30.068 "data_size": 7936 00:18:30.068 } 00:18:30.068 ] 00:18:30.068 }' 00:18:30.068 16:14:56 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:30.068 16:14:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:30.637 16:14:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:30.637 16:14:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.637 16:14:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:30.637 [2024-12-12 16:14:56.779226] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:30.637 [2024-12-12 16:14:56.779364] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:30.637 [2024-12-12 16:14:56.779381] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:18:30.637 [2024-12-12 16:14:56.779414] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:30.637 [2024-12-12 16:14:56.793401] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:18:30.637 16:14:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.637 16:14:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:30.637 [2024-12-12 16:14:56.795193] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:31.575 16:14:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:31.575 16:14:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:31.575 16:14:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:31.575 16:14:57 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:31.575 16:14:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:31.575 16:14:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.575 16:14:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:31.575 16:14:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.575 16:14:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:31.575 16:14:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.575 16:14:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:31.575 "name": "raid_bdev1", 00:18:31.575 "uuid": "677a9cdf-0435-49c8-8a92-6d5798cbd58a", 00:18:31.575 "strip_size_kb": 0, 00:18:31.575 "state": "online", 00:18:31.575 "raid_level": "raid1", 00:18:31.575 "superblock": true, 00:18:31.575 "num_base_bdevs": 2, 00:18:31.575 "num_base_bdevs_discovered": 2, 00:18:31.575 "num_base_bdevs_operational": 2, 00:18:31.575 "process": { 00:18:31.575 "type": "rebuild", 00:18:31.575 "target": "spare", 00:18:31.575 "progress": { 00:18:31.575 "blocks": 2560, 00:18:31.575 "percent": 32 00:18:31.575 } 00:18:31.575 }, 00:18:31.575 "base_bdevs_list": [ 00:18:31.575 { 00:18:31.575 "name": "spare", 00:18:31.575 "uuid": "36d9b6e4-eb3a-55b1-aa08-030812bd62f6", 00:18:31.575 "is_configured": true, 00:18:31.575 "data_offset": 256, 00:18:31.575 "data_size": 7936 00:18:31.575 }, 00:18:31.575 { 00:18:31.575 "name": "BaseBdev2", 00:18:31.575 "uuid": "ffc477cc-f2ca-5c15-beea-7b0a3aa74541", 00:18:31.575 "is_configured": true, 00:18:31.575 "data_offset": 256, 00:18:31.575 "data_size": 7936 00:18:31.575 } 00:18:31.575 ] 00:18:31.575 
}' 00:18:31.575 16:14:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:31.575 16:14:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:31.575 16:14:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:31.835 16:14:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:31.835 16:14:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:31.835 16:14:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.835 16:14:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:31.835 [2024-12-12 16:14:57.931159] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:31.835 [2024-12-12 16:14:58.000089] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:31.835 [2024-12-12 16:14:58.000150] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:31.835 [2024-12-12 16:14:58.000164] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:31.835 [2024-12-12 16:14:58.000184] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:31.835 16:14:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.835 16:14:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:31.835 16:14:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:31.835 16:14:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:18:31.835 16:14:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:31.835 16:14:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:31.835 16:14:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:31.835 16:14:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:31.835 16:14:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:31.835 16:14:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:31.835 16:14:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:31.835 16:14:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.835 16:14:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:31.835 16:14:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.835 16:14:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:31.835 16:14:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.835 16:14:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:31.835 "name": "raid_bdev1", 00:18:31.835 "uuid": "677a9cdf-0435-49c8-8a92-6d5798cbd58a", 00:18:31.835 "strip_size_kb": 0, 00:18:31.835 "state": "online", 00:18:31.835 "raid_level": "raid1", 00:18:31.835 "superblock": true, 00:18:31.835 "num_base_bdevs": 2, 00:18:31.835 "num_base_bdevs_discovered": 1, 00:18:31.835 "num_base_bdevs_operational": 1, 00:18:31.835 "base_bdevs_list": [ 00:18:31.835 { 00:18:31.835 "name": 
null, 00:18:31.835 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:31.835 "is_configured": false, 00:18:31.835 "data_offset": 0, 00:18:31.835 "data_size": 7936 00:18:31.835 }, 00:18:31.835 { 00:18:31.835 "name": "BaseBdev2", 00:18:31.835 "uuid": "ffc477cc-f2ca-5c15-beea-7b0a3aa74541", 00:18:31.835 "is_configured": true, 00:18:31.835 "data_offset": 256, 00:18:31.835 "data_size": 7936 00:18:31.835 } 00:18:31.835 ] 00:18:31.835 }' 00:18:31.835 16:14:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:31.835 16:14:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:32.404 16:14:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:32.404 16:14:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.404 16:14:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:32.404 [2024-12-12 16:14:58.450623] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:32.404 [2024-12-12 16:14:58.450677] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:32.404 [2024-12-12 16:14:58.450699] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:18:32.404 [2024-12-12 16:14:58.450710] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:32.404 [2024-12-12 16:14:58.450970] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:32.404 [2024-12-12 16:14:58.450997] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:32.404 [2024-12-12 16:14:58.451045] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:32.404 [2024-12-12 16:14:58.451076] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock 
seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:32.404 [2024-12-12 16:14:58.451085] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:18:32.404 [2024-12-12 16:14:58.451105] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:32.404 [2024-12-12 16:14:58.464226] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:18:32.404 spare 00:18:32.404 16:14:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.404 16:14:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:32.404 [2024-12-12 16:14:58.466042] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:33.343 16:14:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:33.343 16:14:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:33.343 16:14:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:33.343 16:14:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:33.343 16:14:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:33.343 16:14:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:33.343 16:14:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:33.343 16:14:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.343 16:14:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:33.343 16:14:59 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.343 16:14:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:33.343 "name": "raid_bdev1", 00:18:33.343 "uuid": "677a9cdf-0435-49c8-8a92-6d5798cbd58a", 00:18:33.343 "strip_size_kb": 0, 00:18:33.343 "state": "online", 00:18:33.343 "raid_level": "raid1", 00:18:33.343 "superblock": true, 00:18:33.343 "num_base_bdevs": 2, 00:18:33.343 "num_base_bdevs_discovered": 2, 00:18:33.343 "num_base_bdevs_operational": 2, 00:18:33.343 "process": { 00:18:33.343 "type": "rebuild", 00:18:33.343 "target": "spare", 00:18:33.343 "progress": { 00:18:33.343 "blocks": 2560, 00:18:33.343 "percent": 32 00:18:33.343 } 00:18:33.343 }, 00:18:33.343 "base_bdevs_list": [ 00:18:33.343 { 00:18:33.343 "name": "spare", 00:18:33.343 "uuid": "36d9b6e4-eb3a-55b1-aa08-030812bd62f6", 00:18:33.343 "is_configured": true, 00:18:33.343 "data_offset": 256, 00:18:33.343 "data_size": 7936 00:18:33.343 }, 00:18:33.343 { 00:18:33.343 "name": "BaseBdev2", 00:18:33.343 "uuid": "ffc477cc-f2ca-5c15-beea-7b0a3aa74541", 00:18:33.343 "is_configured": true, 00:18:33.343 "data_offset": 256, 00:18:33.343 "data_size": 7936 00:18:33.343 } 00:18:33.343 ] 00:18:33.343 }' 00:18:33.343 16:14:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:33.343 16:14:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:33.343 16:14:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:33.343 16:14:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:33.343 16:14:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:33.343 16:14:59 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.343 16:14:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:33.343 [2024-12-12 16:14:59.606103] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:33.343 [2024-12-12 16:14:59.670440] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:33.343 [2024-12-12 16:14:59.670491] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:33.343 [2024-12-12 16:14:59.670507] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:33.343 [2024-12-12 16:14:59.670513] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:33.343 16:14:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.344 16:14:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:33.344 16:14:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:33.344 16:14:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:33.344 16:14:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:33.344 16:14:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:33.344 16:14:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:33.344 16:14:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:33.344 16:14:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:33.344 16:14:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:18:33.344 16:14:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:33.603 16:14:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:33.603 16:14:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:33.603 16:14:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.603 16:14:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:33.603 16:14:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.603 16:14:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:33.603 "name": "raid_bdev1", 00:18:33.603 "uuid": "677a9cdf-0435-49c8-8a92-6d5798cbd58a", 00:18:33.603 "strip_size_kb": 0, 00:18:33.603 "state": "online", 00:18:33.603 "raid_level": "raid1", 00:18:33.603 "superblock": true, 00:18:33.603 "num_base_bdevs": 2, 00:18:33.603 "num_base_bdevs_discovered": 1, 00:18:33.603 "num_base_bdevs_operational": 1, 00:18:33.603 "base_bdevs_list": [ 00:18:33.603 { 00:18:33.603 "name": null, 00:18:33.603 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:33.603 "is_configured": false, 00:18:33.603 "data_offset": 0, 00:18:33.603 "data_size": 7936 00:18:33.603 }, 00:18:33.603 { 00:18:33.603 "name": "BaseBdev2", 00:18:33.603 "uuid": "ffc477cc-f2ca-5c15-beea-7b0a3aa74541", 00:18:33.603 "is_configured": true, 00:18:33.603 "data_offset": 256, 00:18:33.603 "data_size": 7936 00:18:33.603 } 00:18:33.603 ] 00:18:33.603 }' 00:18:33.603 16:14:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:33.603 16:14:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:33.863 16:15:00 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:33.863 16:15:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:33.863 16:15:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:33.863 16:15:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:33.863 16:15:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:33.863 16:15:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:33.863 16:15:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.863 16:15:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:33.863 16:15:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:33.863 16:15:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.863 16:15:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:33.863 "name": "raid_bdev1", 00:18:33.863 "uuid": "677a9cdf-0435-49c8-8a92-6d5798cbd58a", 00:18:33.863 "strip_size_kb": 0, 00:18:33.863 "state": "online", 00:18:33.863 "raid_level": "raid1", 00:18:33.863 "superblock": true, 00:18:33.863 "num_base_bdevs": 2, 00:18:33.863 "num_base_bdevs_discovered": 1, 00:18:33.863 "num_base_bdevs_operational": 1, 00:18:33.863 "base_bdevs_list": [ 00:18:33.863 { 00:18:33.863 "name": null, 00:18:33.863 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:33.863 "is_configured": false, 00:18:33.863 "data_offset": 0, 00:18:33.863 "data_size": 7936 00:18:33.863 }, 00:18:33.863 { 00:18:33.863 "name": "BaseBdev2", 00:18:33.863 "uuid": "ffc477cc-f2ca-5c15-beea-7b0a3aa74541", 
00:18:33.863 "is_configured": true, 00:18:33.863 "data_offset": 256, 00:18:33.863 "data_size": 7936 00:18:33.863 } 00:18:33.863 ] 00:18:33.863 }' 00:18:33.863 16:15:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:34.122 16:15:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:34.122 16:15:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:34.122 16:15:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:34.122 16:15:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:18:34.122 16:15:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.122 16:15:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:34.122 16:15:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.122 16:15:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:34.123 16:15:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.123 16:15:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:34.123 [2024-12-12 16:15:00.304082] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:34.123 [2024-12-12 16:15:00.304132] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:34.123 [2024-12-12 16:15:00.304154] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:18:34.123 [2024-12-12 16:15:00.304163] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:18:34.123 [2024-12-12 16:15:00.304363] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:34.123 [2024-12-12 16:15:00.304380] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:34.123 [2024-12-12 16:15:00.304428] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:34.123 [2024-12-12 16:15:00.304442] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:34.123 [2024-12-12 16:15:00.304454] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:34.123 [2024-12-12 16:15:00.304480] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:18:34.123 BaseBdev1 00:18:34.123 16:15:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.123 16:15:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:35.061 16:15:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:35.061 16:15:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:35.061 16:15:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:35.061 16:15:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:35.061 16:15:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:35.061 16:15:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:35.061 16:15:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:35.061 16:15:01 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:35.061 16:15:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:35.061 16:15:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:35.061 16:15:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:35.061 16:15:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.061 16:15:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:35.061 16:15:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:35.061 16:15:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.061 16:15:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:35.061 "name": "raid_bdev1", 00:18:35.061 "uuid": "677a9cdf-0435-49c8-8a92-6d5798cbd58a", 00:18:35.061 "strip_size_kb": 0, 00:18:35.061 "state": "online", 00:18:35.061 "raid_level": "raid1", 00:18:35.061 "superblock": true, 00:18:35.062 "num_base_bdevs": 2, 00:18:35.062 "num_base_bdevs_discovered": 1, 00:18:35.062 "num_base_bdevs_operational": 1, 00:18:35.062 "base_bdevs_list": [ 00:18:35.062 { 00:18:35.062 "name": null, 00:18:35.062 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:35.062 "is_configured": false, 00:18:35.062 "data_offset": 0, 00:18:35.062 "data_size": 7936 00:18:35.062 }, 00:18:35.062 { 00:18:35.062 "name": "BaseBdev2", 00:18:35.062 "uuid": "ffc477cc-f2ca-5c15-beea-7b0a3aa74541", 00:18:35.062 "is_configured": true, 00:18:35.062 "data_offset": 256, 00:18:35.062 "data_size": 7936 00:18:35.062 } 00:18:35.062 ] 00:18:35.062 }' 00:18:35.062 16:15:01 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:35.062 16:15:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:35.630 16:15:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:35.630 16:15:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:35.630 16:15:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:35.630 16:15:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:35.630 16:15:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:35.630 16:15:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:35.630 16:15:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:35.630 16:15:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.630 16:15:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:35.630 16:15:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.630 16:15:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:35.630 "name": "raid_bdev1", 00:18:35.630 "uuid": "677a9cdf-0435-49c8-8a92-6d5798cbd58a", 00:18:35.630 "strip_size_kb": 0, 00:18:35.630 "state": "online", 00:18:35.630 "raid_level": "raid1", 00:18:35.630 "superblock": true, 00:18:35.630 "num_base_bdevs": 2, 00:18:35.630 "num_base_bdevs_discovered": 1, 00:18:35.630 "num_base_bdevs_operational": 1, 00:18:35.630 "base_bdevs_list": [ 00:18:35.630 { 00:18:35.630 "name": null, 00:18:35.630 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:35.630 
"is_configured": false, 00:18:35.631 "data_offset": 0, 00:18:35.631 "data_size": 7936 00:18:35.631 }, 00:18:35.631 { 00:18:35.631 "name": "BaseBdev2", 00:18:35.631 "uuid": "ffc477cc-f2ca-5c15-beea-7b0a3aa74541", 00:18:35.631 "is_configured": true, 00:18:35.631 "data_offset": 256, 00:18:35.631 "data_size": 7936 00:18:35.631 } 00:18:35.631 ] 00:18:35.631 }' 00:18:35.631 16:15:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:35.631 16:15:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:35.631 16:15:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:35.631 16:15:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:35.631 16:15:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:35.631 16:15:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:18:35.631 16:15:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:35.631 16:15:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:35.631 16:15:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:35.631 16:15:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:35.631 16:15:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:35.631 16:15:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:35.631 16:15:01 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.631 16:15:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:35.631 [2024-12-12 16:15:01.913567] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:35.631 [2024-12-12 16:15:01.913691] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:35.631 [2024-12-12 16:15:01.913705] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:35.631 request: 00:18:35.631 { 00:18:35.631 "base_bdev": "BaseBdev1", 00:18:35.631 "raid_bdev": "raid_bdev1", 00:18:35.631 "method": "bdev_raid_add_base_bdev", 00:18:35.631 "req_id": 1 00:18:35.631 } 00:18:35.631 Got JSON-RPC error response 00:18:35.631 response: 00:18:35.631 { 00:18:35.631 "code": -22, 00:18:35.631 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:18:35.631 } 00:18:35.631 16:15:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:35.631 16:15:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # es=1 00:18:35.631 16:15:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:35.631 16:15:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:35.631 16:15:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:35.631 16:15:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:37.010 16:15:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:37.010 16:15:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:18:37.010 16:15:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:37.010 16:15:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:37.010 16:15:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:37.010 16:15:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:37.010 16:15:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:37.010 16:15:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:37.010 16:15:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:37.010 16:15:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:37.010 16:15:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:37.010 16:15:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:37.010 16:15:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.010 16:15:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:37.010 16:15:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.010 16:15:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:37.010 "name": "raid_bdev1", 00:18:37.010 "uuid": "677a9cdf-0435-49c8-8a92-6d5798cbd58a", 00:18:37.010 "strip_size_kb": 0, 00:18:37.010 "state": "online", 00:18:37.010 "raid_level": "raid1", 00:18:37.010 "superblock": true, 00:18:37.010 "num_base_bdevs": 2, 00:18:37.010 
"num_base_bdevs_discovered": 1, 00:18:37.010 "num_base_bdevs_operational": 1, 00:18:37.010 "base_bdevs_list": [ 00:18:37.010 { 00:18:37.010 "name": null, 00:18:37.010 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:37.010 "is_configured": false, 00:18:37.010 "data_offset": 0, 00:18:37.010 "data_size": 7936 00:18:37.010 }, 00:18:37.010 { 00:18:37.010 "name": "BaseBdev2", 00:18:37.010 "uuid": "ffc477cc-f2ca-5c15-beea-7b0a3aa74541", 00:18:37.010 "is_configured": true, 00:18:37.010 "data_offset": 256, 00:18:37.010 "data_size": 7936 00:18:37.010 } 00:18:37.010 ] 00:18:37.010 }' 00:18:37.010 16:15:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:37.010 16:15:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:37.270 16:15:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:37.270 16:15:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:37.270 16:15:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:37.270 16:15:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:37.270 16:15:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:37.270 16:15:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:37.270 16:15:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:37.270 16:15:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.270 16:15:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:37.270 16:15:03 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.270 16:15:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:37.270 "name": "raid_bdev1", 00:18:37.270 "uuid": "677a9cdf-0435-49c8-8a92-6d5798cbd58a", 00:18:37.270 "strip_size_kb": 0, 00:18:37.270 "state": "online", 00:18:37.270 "raid_level": "raid1", 00:18:37.270 "superblock": true, 00:18:37.270 "num_base_bdevs": 2, 00:18:37.270 "num_base_bdevs_discovered": 1, 00:18:37.270 "num_base_bdevs_operational": 1, 00:18:37.270 "base_bdevs_list": [ 00:18:37.270 { 00:18:37.270 "name": null, 00:18:37.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:37.270 "is_configured": false, 00:18:37.270 "data_offset": 0, 00:18:37.270 "data_size": 7936 00:18:37.270 }, 00:18:37.270 { 00:18:37.270 "name": "BaseBdev2", 00:18:37.270 "uuid": "ffc477cc-f2ca-5c15-beea-7b0a3aa74541", 00:18:37.270 "is_configured": true, 00:18:37.270 "data_offset": 256, 00:18:37.270 "data_size": 7936 00:18:37.270 } 00:18:37.270 ] 00:18:37.270 }' 00:18:37.270 16:15:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:37.270 16:15:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:37.270 16:15:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:37.270 16:15:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:37.270 16:15:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 89857 00:18:37.270 16:15:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 89857 ']' 00:18:37.270 16:15:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 89857 00:18:37.270 16:15:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:18:37.270 16:15:03 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:37.270 16:15:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89857 00:18:37.270 16:15:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:37.270 16:15:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:37.270 killing process with pid 89857 00:18:37.270 16:15:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89857' 00:18:37.270 16:15:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 89857 00:18:37.270 Received shutdown signal, test time was about 60.000000 seconds 00:18:37.270 00:18:37.270 Latency(us) 00:18:37.270 [2024-12-12T16:15:03.622Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:37.270 [2024-12-12T16:15:03.623Z] =================================================================================================================== 00:18:37.271 [2024-12-12T16:15:03.623Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:37.271 [2024-12-12 16:15:03.542212] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:37.271 [2024-12-12 16:15:03.542309] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:37.271 [2024-12-12 16:15:03.542349] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:37.271 [2024-12-12 16:15:03.542369] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:18:37.271 16:15:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 89857 00:18:37.530 [2024-12-12 16:15:03.847298] bdev_raid.c:1413:raid_bdev_exit: 
*DEBUG*: raid_bdev_exit 00:18:38.915 16:15:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # return 0 00:18:38.915 00:18:38.915 real 0m19.790s 00:18:38.915 user 0m25.845s 00:18:38.915 sys 0m2.679s 00:18:38.915 16:15:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:38.915 16:15:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:38.915 ************************************ 00:18:38.915 END TEST raid_rebuild_test_sb_md_separate 00:18:38.915 ************************************ 00:18:38.915 16:15:04 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:18:38.915 16:15:04 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:18:38.915 16:15:04 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:38.915 16:15:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:38.915 16:15:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:38.915 ************************************ 00:18:38.915 START TEST raid_state_function_test_sb_md_interleaved 00:18:38.915 ************************************ 00:18:38.915 16:15:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:18:38.915 16:15:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:18:38.915 16:15:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:18:38.915 16:15:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:18:38.915 16:15:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:38.915 16:15:04 bdev_raid.raid_state_function_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:38.915 16:15:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:38.915 16:15:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:38.915 16:15:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:38.915 16:15:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:38.915 16:15:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:38.915 16:15:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:38.915 16:15:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:38.915 16:15:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:38.915 16:15:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:18:38.915 16:15:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:38.915 16:15:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:38.915 16:15:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:38.915 16:15:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:38.915 16:15:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:18:38.915 16:15:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:18:38.915 16:15:04 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:18:38.915 16:15:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:18:38.915 16:15:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=90548 00:18:38.915 16:15:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:38.915 Process raid pid: 90548 00:18:38.915 16:15:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 90548' 00:18:38.915 16:15:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 90548 00:18:38.915 16:15:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 90548 ']' 00:18:38.915 16:15:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:38.915 16:15:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:38.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:38.916 16:15:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:38.916 16:15:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:38.916 16:15:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:38.916 [2024-12-12 16:15:05.062760] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:18:38.916 [2024-12-12 16:15:05.062918] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:38.916 [2024-12-12 16:15:05.249788] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:39.178 [2024-12-12 16:15:05.356320] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:39.437 [2024-12-12 16:15:05.534450] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:39.437 [2024-12-12 16:15:05.534489] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:39.697 16:15:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:39.697 16:15:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:18:39.697 16:15:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:39.697 16:15:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.697 16:15:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:39.697 [2024-12-12 16:15:05.876829] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:39.697 [2024-12-12 16:15:05.876887] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:39.697 [2024-12-12 16:15:05.876909] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:39.697 [2024-12-12 16:15:05.876919] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:39.697 16:15:05 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.697 16:15:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:39.697 16:15:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:39.697 16:15:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:39.697 16:15:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:39.697 16:15:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:39.697 16:15:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:39.697 16:15:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:39.697 16:15:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:39.697 16:15:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:39.697 16:15:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:39.697 16:15:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:39.697 16:15:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:39.697 16:15:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.697 16:15:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:39.697 16:15:05 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.697 16:15:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:39.697 "name": "Existed_Raid", 00:18:39.697 "uuid": "75ac66d4-d29a-48fa-881f-11ab1ab5faf8", 00:18:39.697 "strip_size_kb": 0, 00:18:39.697 "state": "configuring", 00:18:39.697 "raid_level": "raid1", 00:18:39.697 "superblock": true, 00:18:39.697 "num_base_bdevs": 2, 00:18:39.697 "num_base_bdevs_discovered": 0, 00:18:39.697 "num_base_bdevs_operational": 2, 00:18:39.697 "base_bdevs_list": [ 00:18:39.697 { 00:18:39.697 "name": "BaseBdev1", 00:18:39.697 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:39.697 "is_configured": false, 00:18:39.697 "data_offset": 0, 00:18:39.697 "data_size": 0 00:18:39.697 }, 00:18:39.697 { 00:18:39.697 "name": "BaseBdev2", 00:18:39.697 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:39.697 "is_configured": false, 00:18:39.697 "data_offset": 0, 00:18:39.697 "data_size": 0 00:18:39.697 } 00:18:39.697 ] 00:18:39.697 }' 00:18:39.697 16:15:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:39.697 16:15:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:40.266 16:15:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:40.266 16:15:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.266 16:15:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:40.266 [2024-12-12 16:15:06.319999] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:40.266 [2024-12-12 16:15:06.320033] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state 
configuring 00:18:40.266 16:15:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.266 16:15:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:40.266 16:15:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.266 16:15:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:40.266 [2024-12-12 16:15:06.327997] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:40.266 [2024-12-12 16:15:06.328038] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:40.266 [2024-12-12 16:15:06.328046] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:40.266 [2024-12-12 16:15:06.328056] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:40.266 16:15:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.266 16:15:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:18:40.266 16:15:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.266 16:15:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:40.266 [2024-12-12 16:15:06.368747] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:40.266 BaseBdev1 00:18:40.266 16:15:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.266 16:15:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:40.266 16:15:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:18:40.266 16:15:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:40.266 16:15:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:18:40.266 16:15:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:40.266 16:15:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:40.266 16:15:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:40.266 16:15:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.266 16:15:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:40.266 16:15:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.266 16:15:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:40.266 16:15:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.266 16:15:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:40.266 [ 00:18:40.266 { 00:18:40.266 "name": "BaseBdev1", 00:18:40.266 "aliases": [ 00:18:40.266 "a6a72b40-8705-4f94-8240-0bb1eecfb061" 00:18:40.266 ], 00:18:40.266 "product_name": "Malloc disk", 00:18:40.266 "block_size": 4128, 00:18:40.266 "num_blocks": 8192, 00:18:40.266 "uuid": "a6a72b40-8705-4f94-8240-0bb1eecfb061", 00:18:40.266 "md_size": 32, 00:18:40.266 
"md_interleave": true, 00:18:40.266 "dif_type": 0, 00:18:40.266 "assigned_rate_limits": { 00:18:40.266 "rw_ios_per_sec": 0, 00:18:40.266 "rw_mbytes_per_sec": 0, 00:18:40.266 "r_mbytes_per_sec": 0, 00:18:40.266 "w_mbytes_per_sec": 0 00:18:40.266 }, 00:18:40.266 "claimed": true, 00:18:40.266 "claim_type": "exclusive_write", 00:18:40.266 "zoned": false, 00:18:40.266 "supported_io_types": { 00:18:40.266 "read": true, 00:18:40.266 "write": true, 00:18:40.266 "unmap": true, 00:18:40.266 "flush": true, 00:18:40.266 "reset": true, 00:18:40.266 "nvme_admin": false, 00:18:40.266 "nvme_io": false, 00:18:40.266 "nvme_io_md": false, 00:18:40.266 "write_zeroes": true, 00:18:40.266 "zcopy": true, 00:18:40.266 "get_zone_info": false, 00:18:40.266 "zone_management": false, 00:18:40.266 "zone_append": false, 00:18:40.266 "compare": false, 00:18:40.266 "compare_and_write": false, 00:18:40.266 "abort": true, 00:18:40.266 "seek_hole": false, 00:18:40.266 "seek_data": false, 00:18:40.266 "copy": true, 00:18:40.266 "nvme_iov_md": false 00:18:40.266 }, 00:18:40.266 "memory_domains": [ 00:18:40.266 { 00:18:40.266 "dma_device_id": "system", 00:18:40.266 "dma_device_type": 1 00:18:40.266 }, 00:18:40.266 { 00:18:40.266 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:40.266 "dma_device_type": 2 00:18:40.266 } 00:18:40.266 ], 00:18:40.266 "driver_specific": {} 00:18:40.266 } 00:18:40.266 ] 00:18:40.266 16:15:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.266 16:15:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:18:40.266 16:15:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:40.266 16:15:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:40.266 16:15:06 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:40.266 16:15:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:40.266 16:15:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:40.266 16:15:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:40.266 16:15:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:40.266 16:15:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:40.266 16:15:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:40.266 16:15:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:40.266 16:15:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:40.266 16:15:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:40.266 16:15:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.266 16:15:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:40.266 16:15:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.266 16:15:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:40.266 "name": "Existed_Raid", 00:18:40.266 "uuid": "a237d063-f8e7-458f-a3ff-428e23e72dce", 00:18:40.266 "strip_size_kb": 0, 00:18:40.266 "state": "configuring", 00:18:40.267 "raid_level": "raid1", 
00:18:40.267 "superblock": true, 00:18:40.267 "num_base_bdevs": 2, 00:18:40.267 "num_base_bdevs_discovered": 1, 00:18:40.267 "num_base_bdevs_operational": 2, 00:18:40.267 "base_bdevs_list": [ 00:18:40.267 { 00:18:40.267 "name": "BaseBdev1", 00:18:40.267 "uuid": "a6a72b40-8705-4f94-8240-0bb1eecfb061", 00:18:40.267 "is_configured": true, 00:18:40.267 "data_offset": 256, 00:18:40.267 "data_size": 7936 00:18:40.267 }, 00:18:40.267 { 00:18:40.267 "name": "BaseBdev2", 00:18:40.267 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:40.267 "is_configured": false, 00:18:40.267 "data_offset": 0, 00:18:40.267 "data_size": 0 00:18:40.267 } 00:18:40.267 ] 00:18:40.267 }' 00:18:40.267 16:15:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:40.267 16:15:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:40.526 16:15:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:40.526 16:15:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.526 16:15:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:40.526 [2024-12-12 16:15:06.839985] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:40.526 [2024-12-12 16:15:06.840022] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:40.526 16:15:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.526 16:15:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:40.526 16:15:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 
-- # xtrace_disable 00:18:40.526 16:15:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:40.526 [2024-12-12 16:15:06.852009] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:40.526 [2024-12-12 16:15:06.853706] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:40.526 [2024-12-12 16:15:06.853747] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:40.526 16:15:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.526 16:15:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:18:40.526 16:15:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:40.526 16:15:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:40.526 16:15:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:40.526 16:15:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:40.526 16:15:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:40.526 16:15:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:40.526 16:15:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:40.526 16:15:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:40.526 16:15:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:40.526 
16:15:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:40.526 16:15:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:40.526 16:15:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:40.526 16:15:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:40.526 16:15:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.526 16:15:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:40.786 16:15:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.786 16:15:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:40.786 "name": "Existed_Raid", 00:18:40.786 "uuid": "58223b86-9175-46fb-8200-51ba2bdc01c6", 00:18:40.786 "strip_size_kb": 0, 00:18:40.786 "state": "configuring", 00:18:40.786 "raid_level": "raid1", 00:18:40.786 "superblock": true, 00:18:40.786 "num_base_bdevs": 2, 00:18:40.786 "num_base_bdevs_discovered": 1, 00:18:40.786 "num_base_bdevs_operational": 2, 00:18:40.786 "base_bdevs_list": [ 00:18:40.786 { 00:18:40.786 "name": "BaseBdev1", 00:18:40.786 "uuid": "a6a72b40-8705-4f94-8240-0bb1eecfb061", 00:18:40.786 "is_configured": true, 00:18:40.786 "data_offset": 256, 00:18:40.786 "data_size": 7936 00:18:40.786 }, 00:18:40.786 { 00:18:40.786 "name": "BaseBdev2", 00:18:40.786 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:40.786 "is_configured": false, 00:18:40.786 "data_offset": 0, 00:18:40.786 "data_size": 0 00:18:40.786 } 00:18:40.786 ] 00:18:40.786 }' 00:18:40.786 16:15:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:18:40.786 16:15:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:41.046 16:15:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:18:41.046 16:15:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.046 16:15:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:41.046 [2024-12-12 16:15:07.355612] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:41.046 [2024-12-12 16:15:07.355832] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:41.046 [2024-12-12 16:15:07.355846] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:41.046 [2024-12-12 16:15:07.355939] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:41.046 [2024-12-12 16:15:07.356012] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:41.046 [2024-12-12 16:15:07.356028] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:18:41.046 [2024-12-12 16:15:07.356103] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:41.046 BaseBdev2 00:18:41.046 16:15:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.046 16:15:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:18:41.046 16:15:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:18:41.046 16:15:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:18:41.046 16:15:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:18:41.046 16:15:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:41.046 16:15:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:41.046 16:15:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:41.046 16:15:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.046 16:15:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:41.046 16:15:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.046 16:15:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:41.046 16:15:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.046 16:15:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:41.046 [ 00:18:41.046 { 00:18:41.046 "name": "BaseBdev2", 00:18:41.046 "aliases": [ 00:18:41.046 "eaeff49c-e3d0-48d7-a322-50bcf02ae321" 00:18:41.046 ], 00:18:41.046 "product_name": "Malloc disk", 00:18:41.046 "block_size": 4128, 00:18:41.046 "num_blocks": 8192, 00:18:41.046 "uuid": "eaeff49c-e3d0-48d7-a322-50bcf02ae321", 00:18:41.046 "md_size": 32, 00:18:41.046 "md_interleave": true, 00:18:41.046 "dif_type": 0, 00:18:41.046 "assigned_rate_limits": { 00:18:41.046 "rw_ios_per_sec": 0, 00:18:41.046 "rw_mbytes_per_sec": 0, 00:18:41.046 "r_mbytes_per_sec": 0, 00:18:41.046 "w_mbytes_per_sec": 0 00:18:41.046 }, 00:18:41.046 "claimed": true, 00:18:41.046 "claim_type": "exclusive_write", 
00:18:41.046 "zoned": false, 00:18:41.046 "supported_io_types": { 00:18:41.046 "read": true, 00:18:41.046 "write": true, 00:18:41.046 "unmap": true, 00:18:41.046 "flush": true, 00:18:41.046 "reset": true, 00:18:41.046 "nvme_admin": false, 00:18:41.046 "nvme_io": false, 00:18:41.046 "nvme_io_md": false, 00:18:41.046 "write_zeroes": true, 00:18:41.046 "zcopy": true, 00:18:41.046 "get_zone_info": false, 00:18:41.046 "zone_management": false, 00:18:41.046 "zone_append": false, 00:18:41.046 "compare": false, 00:18:41.046 "compare_and_write": false, 00:18:41.046 "abort": true, 00:18:41.046 "seek_hole": false, 00:18:41.046 "seek_data": false, 00:18:41.046 "copy": true, 00:18:41.046 "nvme_iov_md": false 00:18:41.046 }, 00:18:41.046 "memory_domains": [ 00:18:41.046 { 00:18:41.046 "dma_device_id": "system", 00:18:41.046 "dma_device_type": 1 00:18:41.046 }, 00:18:41.046 { 00:18:41.046 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:41.046 "dma_device_type": 2 00:18:41.046 } 00:18:41.046 ], 00:18:41.046 "driver_specific": {} 00:18:41.046 } 00:18:41.046 ] 00:18:41.046 16:15:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.046 16:15:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:18:41.046 16:15:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:41.046 16:15:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:41.046 16:15:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:18:41.046 16:15:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:41.046 16:15:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:41.046 
16:15:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:41.046 16:15:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:41.046 16:15:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:41.046 16:15:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:41.046 16:15:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:41.046 16:15:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:41.046 16:15:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:41.306 16:15:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:41.306 16:15:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.306 16:15:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:41.306 16:15:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:41.306 16:15:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.306 16:15:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:41.306 "name": "Existed_Raid", 00:18:41.306 "uuid": "58223b86-9175-46fb-8200-51ba2bdc01c6", 00:18:41.306 "strip_size_kb": 0, 00:18:41.306 "state": "online", 00:18:41.306 "raid_level": "raid1", 00:18:41.306 "superblock": true, 00:18:41.306 "num_base_bdevs": 2, 00:18:41.306 "num_base_bdevs_discovered": 2, 00:18:41.306 
"num_base_bdevs_operational": 2, 00:18:41.306 "base_bdevs_list": [ 00:18:41.306 { 00:18:41.306 "name": "BaseBdev1", 00:18:41.306 "uuid": "a6a72b40-8705-4f94-8240-0bb1eecfb061", 00:18:41.306 "is_configured": true, 00:18:41.306 "data_offset": 256, 00:18:41.306 "data_size": 7936 00:18:41.306 }, 00:18:41.306 { 00:18:41.306 "name": "BaseBdev2", 00:18:41.306 "uuid": "eaeff49c-e3d0-48d7-a322-50bcf02ae321", 00:18:41.306 "is_configured": true, 00:18:41.306 "data_offset": 256, 00:18:41.306 "data_size": 7936 00:18:41.306 } 00:18:41.306 ] 00:18:41.306 }' 00:18:41.306 16:15:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:41.306 16:15:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:41.566 16:15:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:41.566 16:15:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:41.566 16:15:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:41.566 16:15:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:41.566 16:15:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:18:41.566 16:15:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:41.566 16:15:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:41.566 16:15:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:41.566 16:15:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.566 16:15:07 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:41.566 [2024-12-12 16:15:07.839070] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:41.566 16:15:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.566 16:15:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:41.566 "name": "Existed_Raid", 00:18:41.566 "aliases": [ 00:18:41.566 "58223b86-9175-46fb-8200-51ba2bdc01c6" 00:18:41.566 ], 00:18:41.566 "product_name": "Raid Volume", 00:18:41.566 "block_size": 4128, 00:18:41.566 "num_blocks": 7936, 00:18:41.566 "uuid": "58223b86-9175-46fb-8200-51ba2bdc01c6", 00:18:41.566 "md_size": 32, 00:18:41.566 "md_interleave": true, 00:18:41.566 "dif_type": 0, 00:18:41.566 "assigned_rate_limits": { 00:18:41.566 "rw_ios_per_sec": 0, 00:18:41.566 "rw_mbytes_per_sec": 0, 00:18:41.566 "r_mbytes_per_sec": 0, 00:18:41.566 "w_mbytes_per_sec": 0 00:18:41.566 }, 00:18:41.566 "claimed": false, 00:18:41.566 "zoned": false, 00:18:41.566 "supported_io_types": { 00:18:41.566 "read": true, 00:18:41.566 "write": true, 00:18:41.566 "unmap": false, 00:18:41.566 "flush": false, 00:18:41.566 "reset": true, 00:18:41.566 "nvme_admin": false, 00:18:41.566 "nvme_io": false, 00:18:41.566 "nvme_io_md": false, 00:18:41.566 "write_zeroes": true, 00:18:41.566 "zcopy": false, 00:18:41.566 "get_zone_info": false, 00:18:41.566 "zone_management": false, 00:18:41.566 "zone_append": false, 00:18:41.566 "compare": false, 00:18:41.566 "compare_and_write": false, 00:18:41.566 "abort": false, 00:18:41.566 "seek_hole": false, 00:18:41.566 "seek_data": false, 00:18:41.566 "copy": false, 00:18:41.566 "nvme_iov_md": false 00:18:41.566 }, 00:18:41.566 "memory_domains": [ 00:18:41.566 { 00:18:41.566 "dma_device_id": "system", 00:18:41.566 "dma_device_type": 1 00:18:41.566 }, 00:18:41.566 { 00:18:41.566 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:18:41.566 "dma_device_type": 2 00:18:41.566 }, 00:18:41.566 { 00:18:41.566 "dma_device_id": "system", 00:18:41.566 "dma_device_type": 1 00:18:41.566 }, 00:18:41.566 { 00:18:41.566 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:41.566 "dma_device_type": 2 00:18:41.566 } 00:18:41.566 ], 00:18:41.566 "driver_specific": { 00:18:41.566 "raid": { 00:18:41.566 "uuid": "58223b86-9175-46fb-8200-51ba2bdc01c6", 00:18:41.566 "strip_size_kb": 0, 00:18:41.566 "state": "online", 00:18:41.566 "raid_level": "raid1", 00:18:41.566 "superblock": true, 00:18:41.566 "num_base_bdevs": 2, 00:18:41.566 "num_base_bdevs_discovered": 2, 00:18:41.566 "num_base_bdevs_operational": 2, 00:18:41.566 "base_bdevs_list": [ 00:18:41.566 { 00:18:41.566 "name": "BaseBdev1", 00:18:41.566 "uuid": "a6a72b40-8705-4f94-8240-0bb1eecfb061", 00:18:41.566 "is_configured": true, 00:18:41.566 "data_offset": 256, 00:18:41.566 "data_size": 7936 00:18:41.566 }, 00:18:41.566 { 00:18:41.566 "name": "BaseBdev2", 00:18:41.566 "uuid": "eaeff49c-e3d0-48d7-a322-50bcf02ae321", 00:18:41.566 "is_configured": true, 00:18:41.566 "data_offset": 256, 00:18:41.566 "data_size": 7936 00:18:41.566 } 00:18:41.566 ] 00:18:41.566 } 00:18:41.566 } 00:18:41.566 }' 00:18:41.566 16:15:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:41.566 16:15:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:41.566 BaseBdev2' 00:18:41.566 16:15:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:41.826 16:15:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:18:41.826 16:15:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:18:41.826 16:15:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:41.826 16:15:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:18:41.826 16:15:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.826 16:15:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:41.826 16:15:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.826 16:15:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:41.826 16:15:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:41.826 16:15:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:41.826 16:15:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:41.826 16:15:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.826 16:15:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:41.826 16:15:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:41.826 16:15:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.826 16:15:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:41.826 
16:15:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:41.827 16:15:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:41.827 16:15:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.827 16:15:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:41.827 [2024-12-12 16:15:08.042502] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:41.827 16:15:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.827 16:15:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:41.827 16:15:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:18:41.827 16:15:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:41.827 16:15:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:18:41.827 16:15:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:18:41.827 16:15:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:18:41.827 16:15:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:41.827 16:15:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:41.827 16:15:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:41.827 16:15:08 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:41.827 16:15:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:41.827 16:15:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:41.827 16:15:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:41.827 16:15:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:41.827 16:15:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:41.827 16:15:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:41.827 16:15:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:41.827 16:15:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.827 16:15:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:41.827 16:15:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.086 16:15:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:42.086 "name": "Existed_Raid", 00:18:42.086 "uuid": "58223b86-9175-46fb-8200-51ba2bdc01c6", 00:18:42.086 "strip_size_kb": 0, 00:18:42.086 "state": "online", 00:18:42.086 "raid_level": "raid1", 00:18:42.086 "superblock": true, 00:18:42.086 "num_base_bdevs": 2, 00:18:42.086 "num_base_bdevs_discovered": 1, 00:18:42.086 "num_base_bdevs_operational": 1, 00:18:42.086 "base_bdevs_list": [ 00:18:42.086 { 00:18:42.086 "name": null, 00:18:42.086 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:18:42.086 "is_configured": false, 00:18:42.086 "data_offset": 0, 00:18:42.086 "data_size": 7936 00:18:42.086 }, 00:18:42.086 { 00:18:42.086 "name": "BaseBdev2", 00:18:42.086 "uuid": "eaeff49c-e3d0-48d7-a322-50bcf02ae321", 00:18:42.086 "is_configured": true, 00:18:42.086 "data_offset": 256, 00:18:42.086 "data_size": 7936 00:18:42.086 } 00:18:42.086 ] 00:18:42.086 }' 00:18:42.086 16:15:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:42.086 16:15:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:42.345 16:15:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:42.345 16:15:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:42.345 16:15:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:42.346 16:15:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.346 16:15:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:42.346 16:15:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:42.346 16:15:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.346 16:15:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:42.346 16:15:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:42.346 16:15:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:18:42.346 16:15:08 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.346 16:15:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:42.346 [2024-12-12 16:15:08.592600] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:42.346 [2024-12-12 16:15:08.592710] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:42.346 [2024-12-12 16:15:08.681528] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:42.346 [2024-12-12 16:15:08.681580] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:42.346 [2024-12-12 16:15:08.681591] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:42.346 16:15:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.346 16:15:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:42.346 16:15:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:42.346 16:15:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:42.346 16:15:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:42.346 16:15:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.346 16:15:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:42.605 16:15:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.605 16:15:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:42.605 16:15:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:42.606 16:15:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:18:42.606 16:15:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 90548 00:18:42.606 16:15:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 90548 ']' 00:18:42.606 16:15:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 90548 00:18:42.606 16:15:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:18:42.606 16:15:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:42.606 16:15:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90548 00:18:42.606 16:15:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:42.606 16:15:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:42.606 killing process with pid 90548 00:18:42.606 16:15:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90548' 00:18:42.606 16:15:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 90548 00:18:42.606 [2024-12-12 16:15:08.781327] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:42.606 16:15:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 90548 00:18:42.606 [2024-12-12 16:15:08.797498] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:43.554 
16:15:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:18:43.554 00:18:43.554 real 0m4.918s 00:18:43.554 user 0m7.059s 00:18:43.554 sys 0m0.906s 00:18:43.554 16:15:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:43.554 16:15:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:43.554 ************************************ 00:18:43.554 END TEST raid_state_function_test_sb_md_interleaved 00:18:43.554 ************************************ 00:18:43.832 16:15:09 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:18:43.832 16:15:09 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:18:43.832 16:15:09 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:43.832 16:15:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:43.832 ************************************ 00:18:43.832 START TEST raid_superblock_test_md_interleaved 00:18:43.832 ************************************ 00:18:43.832 16:15:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:18:43.832 16:15:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:18:43.832 16:15:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:18:43.832 16:15:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:18:43.832 16:15:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:18:43.832 16:15:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:18:43.832 16:15:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:18:43.832 16:15:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:18:43.832 16:15:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:18:43.832 16:15:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:18:43.832 16:15:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:18:43.832 16:15:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:18:43.832 16:15:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:18:43.832 16:15:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:18:43.832 16:15:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:18:43.832 16:15:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:18:43.832 16:15:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=90798 00:18:43.832 16:15:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:18:43.832 16:15:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 90798 00:18:43.832 16:15:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 90798 ']' 00:18:43.832 16:15:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:43.832 16:15:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:43.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:43.832 16:15:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:43.832 16:15:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:43.832 16:15:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:43.832 [2024-12-12 16:15:10.061118] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:18:43.832 [2024-12-12 16:15:10.061257] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90798 ] 00:18:44.116 [2024-12-12 16:15:10.240582] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:44.116 [2024-12-12 16:15:10.348118] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:44.394 [2024-12-12 16:15:10.532636] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:44.394 [2024-12-12 16:15:10.532695] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:44.652 16:15:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:44.653 16:15:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:18:44.653 16:15:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:18:44.653 16:15:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:44.653 16:15:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:18:44.653 16:15:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt1 00:18:44.653 16:15:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:44.653 16:15:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:44.653 16:15:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:44.653 16:15:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:44.653 16:15:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:18:44.653 16:15:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.653 16:15:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:44.653 malloc1 00:18:44.653 16:15:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.653 16:15:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:44.653 16:15:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.653 16:15:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:44.653 [2024-12-12 16:15:10.907197] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:44.653 [2024-12-12 16:15:10.907261] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:44.653 [2024-12-12 16:15:10.907282] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:44.653 [2024-12-12 16:15:10.907291] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:44.653 
[2024-12-12 16:15:10.909116] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:44.653 [2024-12-12 16:15:10.909151] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:44.653 pt1 00:18:44.653 16:15:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.653 16:15:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:44.653 16:15:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:44.653 16:15:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:18:44.653 16:15:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:18:44.653 16:15:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:44.653 16:15:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:44.653 16:15:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:44.653 16:15:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:44.653 16:15:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:18:44.653 16:15:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.653 16:15:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:44.653 malloc2 00:18:44.653 16:15:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.653 16:15:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 
-- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:44.653 16:15:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.653 16:15:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:44.653 [2024-12-12 16:15:10.960324] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:44.653 [2024-12-12 16:15:10.960379] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:44.653 [2024-12-12 16:15:10.960400] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:44.653 [2024-12-12 16:15:10.960408] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:44.653 [2024-12-12 16:15:10.962203] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:44.653 [2024-12-12 16:15:10.962236] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:44.653 pt2 00:18:44.653 16:15:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.653 16:15:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:44.653 16:15:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:44.653 16:15:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:18:44.653 16:15:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.653 16:15:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:44.653 [2024-12-12 16:15:10.972338] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:44.653 [2024-12-12 16:15:10.974105] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:44.653 [2024-12-12 16:15:10.974277] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:44.653 [2024-12-12 16:15:10.974289] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:44.653 [2024-12-12 16:15:10.974357] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:44.653 [2024-12-12 16:15:10.974442] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:44.653 [2024-12-12 16:15:10.974489] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:44.653 [2024-12-12 16:15:10.974560] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:44.653 16:15:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.653 16:15:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:44.653 16:15:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:44.653 16:15:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:44.653 16:15:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:44.653 16:15:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:44.653 16:15:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:44.653 16:15:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:44.653 16:15:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:44.653 
16:15:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:44.653 16:15:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:44.653 16:15:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.653 16:15:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:44.653 16:15:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.653 16:15:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:44.653 16:15:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.911 16:15:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:44.911 "name": "raid_bdev1", 00:18:44.911 "uuid": "9e0c0585-46f4-4f2f-a562-8c9f6cdffe45", 00:18:44.911 "strip_size_kb": 0, 00:18:44.911 "state": "online", 00:18:44.911 "raid_level": "raid1", 00:18:44.911 "superblock": true, 00:18:44.911 "num_base_bdevs": 2, 00:18:44.911 "num_base_bdevs_discovered": 2, 00:18:44.911 "num_base_bdevs_operational": 2, 00:18:44.911 "base_bdevs_list": [ 00:18:44.911 { 00:18:44.911 "name": "pt1", 00:18:44.911 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:44.911 "is_configured": true, 00:18:44.911 "data_offset": 256, 00:18:44.911 "data_size": 7936 00:18:44.911 }, 00:18:44.911 { 00:18:44.911 "name": "pt2", 00:18:44.911 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:44.911 "is_configured": true, 00:18:44.911 "data_offset": 256, 00:18:44.911 "data_size": 7936 00:18:44.911 } 00:18:44.911 ] 00:18:44.911 }' 00:18:44.911 16:15:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:44.911 16:15:11 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:45.171 16:15:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:18:45.171 16:15:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:45.171 16:15:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:45.171 16:15:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:45.171 16:15:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:18:45.171 16:15:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:45.171 16:15:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:45.171 16:15:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.171 16:15:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:45.171 16:15:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:45.171 [2024-12-12 16:15:11.423809] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:45.171 16:15:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.171 16:15:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:45.171 "name": "raid_bdev1", 00:18:45.171 "aliases": [ 00:18:45.171 "9e0c0585-46f4-4f2f-a562-8c9f6cdffe45" 00:18:45.171 ], 00:18:45.171 "product_name": "Raid Volume", 00:18:45.171 "block_size": 4128, 00:18:45.171 "num_blocks": 7936, 00:18:45.171 "uuid": "9e0c0585-46f4-4f2f-a562-8c9f6cdffe45", 00:18:45.171 "md_size": 32, 
00:18:45.171 "md_interleave": true, 00:18:45.171 "dif_type": 0, 00:18:45.171 "assigned_rate_limits": { 00:18:45.171 "rw_ios_per_sec": 0, 00:18:45.171 "rw_mbytes_per_sec": 0, 00:18:45.171 "r_mbytes_per_sec": 0, 00:18:45.171 "w_mbytes_per_sec": 0 00:18:45.171 }, 00:18:45.171 "claimed": false, 00:18:45.171 "zoned": false, 00:18:45.171 "supported_io_types": { 00:18:45.171 "read": true, 00:18:45.171 "write": true, 00:18:45.171 "unmap": false, 00:18:45.171 "flush": false, 00:18:45.171 "reset": true, 00:18:45.171 "nvme_admin": false, 00:18:45.171 "nvme_io": false, 00:18:45.171 "nvme_io_md": false, 00:18:45.171 "write_zeroes": true, 00:18:45.171 "zcopy": false, 00:18:45.171 "get_zone_info": false, 00:18:45.171 "zone_management": false, 00:18:45.171 "zone_append": false, 00:18:45.171 "compare": false, 00:18:45.171 "compare_and_write": false, 00:18:45.171 "abort": false, 00:18:45.171 "seek_hole": false, 00:18:45.171 "seek_data": false, 00:18:45.171 "copy": false, 00:18:45.171 "nvme_iov_md": false 00:18:45.171 }, 00:18:45.171 "memory_domains": [ 00:18:45.171 { 00:18:45.171 "dma_device_id": "system", 00:18:45.171 "dma_device_type": 1 00:18:45.171 }, 00:18:45.171 { 00:18:45.171 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:45.171 "dma_device_type": 2 00:18:45.171 }, 00:18:45.171 { 00:18:45.171 "dma_device_id": "system", 00:18:45.171 "dma_device_type": 1 00:18:45.171 }, 00:18:45.171 { 00:18:45.171 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:45.171 "dma_device_type": 2 00:18:45.171 } 00:18:45.171 ], 00:18:45.171 "driver_specific": { 00:18:45.171 "raid": { 00:18:45.171 "uuid": "9e0c0585-46f4-4f2f-a562-8c9f6cdffe45", 00:18:45.171 "strip_size_kb": 0, 00:18:45.171 "state": "online", 00:18:45.171 "raid_level": "raid1", 00:18:45.171 "superblock": true, 00:18:45.171 "num_base_bdevs": 2, 00:18:45.171 "num_base_bdevs_discovered": 2, 00:18:45.171 "num_base_bdevs_operational": 2, 00:18:45.171 "base_bdevs_list": [ 00:18:45.171 { 00:18:45.171 "name": "pt1", 00:18:45.171 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:18:45.171 "is_configured": true, 00:18:45.171 "data_offset": 256, 00:18:45.171 "data_size": 7936 00:18:45.171 }, 00:18:45.171 { 00:18:45.171 "name": "pt2", 00:18:45.171 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:45.171 "is_configured": true, 00:18:45.171 "data_offset": 256, 00:18:45.171 "data_size": 7936 00:18:45.171 } 00:18:45.171 ] 00:18:45.171 } 00:18:45.171 } 00:18:45.171 }' 00:18:45.171 16:15:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:45.171 16:15:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:45.171 pt2' 00:18:45.171 16:15:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:45.431 16:15:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:18:45.431 16:15:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:45.431 16:15:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:45.431 16:15:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:45.431 16:15:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.431 16:15:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:45.431 16:15:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.431 16:15:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:45.431 16:15:11 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:45.431 16:15:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:45.431 16:15:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:45.431 16:15:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.431 16:15:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:45.431 16:15:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:45.431 16:15:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.431 16:15:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:45.431 16:15:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:45.431 16:15:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:45.431 16:15:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.431 16:15:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:18:45.431 16:15:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:45.431 [2024-12-12 16:15:11.639389] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:45.431 16:15:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.431 16:15:11 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=9e0c0585-46f4-4f2f-a562-8c9f6cdffe45 00:18:45.431 16:15:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z 9e0c0585-46f4-4f2f-a562-8c9f6cdffe45 ']' 00:18:45.431 16:15:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:45.431 16:15:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.431 16:15:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:45.431 [2024-12-12 16:15:11.683079] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:45.431 [2024-12-12 16:15:11.683111] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:45.431 [2024-12-12 16:15:11.683182] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:45.431 [2024-12-12 16:15:11.683228] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:45.431 [2024-12-12 16:15:11.683239] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:45.431 16:15:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.431 16:15:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:45.431 16:15:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.431 16:15:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:45.431 16:15:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:18:45.431 16:15:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.431 16:15:11 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:18:45.431 16:15:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:18:45.431 16:15:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:45.431 16:15:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:18:45.431 16:15:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.431 16:15:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:45.431 16:15:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.431 16:15:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:45.431 16:15:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:18:45.431 16:15:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.431 16:15:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:45.431 16:15:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.431 16:15:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:18:45.431 16:15:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.432 16:15:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:45.432 16:15:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:45.432 16:15:11 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.691 16:15:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:18:45.692 16:15:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:45.692 16:15:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:18:45.692 16:15:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:45.692 16:15:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:45.692 16:15:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:45.692 16:15:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:45.692 16:15:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:45.692 16:15:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:45.692 16:15:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.692 16:15:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:45.692 [2024-12-12 16:15:11.806886] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:45.692 [2024-12-12 16:15:11.808757] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:45.692 [2024-12-12 16:15:11.808845] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: 
Superblock of a different raid bdev found on bdev malloc1 00:18:45.692 [2024-12-12 16:15:11.808903] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:45.692 [2024-12-12 16:15:11.808918] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:45.692 [2024-12-12 16:15:11.808928] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:18:45.692 request: 00:18:45.692 { 00:18:45.692 "name": "raid_bdev1", 00:18:45.692 "raid_level": "raid1", 00:18:45.692 "base_bdevs": [ 00:18:45.692 "malloc1", 00:18:45.692 "malloc2" 00:18:45.692 ], 00:18:45.692 "superblock": false, 00:18:45.692 "method": "bdev_raid_create", 00:18:45.692 "req_id": 1 00:18:45.692 } 00:18:45.692 Got JSON-RPC error response 00:18:45.692 response: 00:18:45.692 { 00:18:45.692 "code": -17, 00:18:45.692 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:45.692 } 00:18:45.692 16:15:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:45.692 16:15:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:18:45.692 16:15:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:45.692 16:15:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:45.692 16:15:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:45.692 16:15:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:45.692 16:15:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.692 16:15:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:18:45.692 16:15:11 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:45.692 16:15:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.692 16:15:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:18:45.692 16:15:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:18:45.692 16:15:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:45.692 16:15:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.692 16:15:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:45.692 [2024-12-12 16:15:11.874751] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:45.692 [2024-12-12 16:15:11.874816] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:45.692 [2024-12-12 16:15:11.874830] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:45.692 [2024-12-12 16:15:11.874839] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:45.692 [2024-12-12 16:15:11.876727] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:45.692 [2024-12-12 16:15:11.876766] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:45.692 [2024-12-12 16:15:11.876808] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:45.692 [2024-12-12 16:15:11.876855] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:45.692 pt1 00:18:45.692 16:15:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.692 16:15:11 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:18:45.692 16:15:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:45.692 16:15:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:45.692 16:15:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:45.692 16:15:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:45.692 16:15:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:45.692 16:15:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:45.692 16:15:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:45.692 16:15:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:45.692 16:15:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:45.692 16:15:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:45.692 16:15:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:45.692 16:15:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.692 16:15:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:45.692 16:15:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.692 16:15:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:45.692 
"name": "raid_bdev1", 00:18:45.692 "uuid": "9e0c0585-46f4-4f2f-a562-8c9f6cdffe45", 00:18:45.692 "strip_size_kb": 0, 00:18:45.692 "state": "configuring", 00:18:45.692 "raid_level": "raid1", 00:18:45.692 "superblock": true, 00:18:45.692 "num_base_bdevs": 2, 00:18:45.692 "num_base_bdevs_discovered": 1, 00:18:45.692 "num_base_bdevs_operational": 2, 00:18:45.692 "base_bdevs_list": [ 00:18:45.692 { 00:18:45.692 "name": "pt1", 00:18:45.692 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:45.692 "is_configured": true, 00:18:45.692 "data_offset": 256, 00:18:45.692 "data_size": 7936 00:18:45.692 }, 00:18:45.692 { 00:18:45.692 "name": null, 00:18:45.692 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:45.692 "is_configured": false, 00:18:45.692 "data_offset": 256, 00:18:45.692 "data_size": 7936 00:18:45.692 } 00:18:45.692 ] 00:18:45.692 }' 00:18:45.692 16:15:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:45.692 16:15:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:45.952 16:15:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:18:45.952 16:15:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:18:45.952 16:15:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:45.952 16:15:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:45.952 16:15:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.952 16:15:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:45.952 [2024-12-12 16:15:12.302021] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:45.952 [2024-12-12 16:15:12.302092] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:45.952 [2024-12-12 16:15:12.302111] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:45.952 [2024-12-12 16:15:12.302121] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:45.952 [2024-12-12 16:15:12.302238] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:45.952 [2024-12-12 16:15:12.302253] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:45.952 [2024-12-12 16:15:12.302288] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:45.952 [2024-12-12 16:15:12.302305] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:45.952 [2024-12-12 16:15:12.302374] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:45.952 [2024-12-12 16:15:12.302402] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:45.952 [2024-12-12 16:15:12.302468] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:45.952 [2024-12-12 16:15:12.302528] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:45.952 [2024-12-12 16:15:12.302535] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:18:45.952 [2024-12-12 16:15:12.302588] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:46.211 pt2 00:18:46.211 16:15:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.211 16:15:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:46.211 16:15:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:46.211 16:15:12 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:46.211 16:15:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:46.211 16:15:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:46.211 16:15:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:46.211 16:15:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:46.211 16:15:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:46.211 16:15:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:46.211 16:15:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:46.211 16:15:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:46.211 16:15:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:46.211 16:15:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:46.211 16:15:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:46.211 16:15:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.211 16:15:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:46.211 16:15:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.211 16:15:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:46.211 "name": 
"raid_bdev1", 00:18:46.211 "uuid": "9e0c0585-46f4-4f2f-a562-8c9f6cdffe45", 00:18:46.211 "strip_size_kb": 0, 00:18:46.211 "state": "online", 00:18:46.211 "raid_level": "raid1", 00:18:46.211 "superblock": true, 00:18:46.211 "num_base_bdevs": 2, 00:18:46.211 "num_base_bdevs_discovered": 2, 00:18:46.211 "num_base_bdevs_operational": 2, 00:18:46.211 "base_bdevs_list": [ 00:18:46.211 { 00:18:46.211 "name": "pt1", 00:18:46.211 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:46.211 "is_configured": true, 00:18:46.211 "data_offset": 256, 00:18:46.211 "data_size": 7936 00:18:46.211 }, 00:18:46.211 { 00:18:46.211 "name": "pt2", 00:18:46.211 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:46.211 "is_configured": true, 00:18:46.211 "data_offset": 256, 00:18:46.211 "data_size": 7936 00:18:46.211 } 00:18:46.211 ] 00:18:46.211 }' 00:18:46.211 16:15:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:46.211 16:15:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:46.471 16:15:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:18:46.471 16:15:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:46.471 16:15:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:46.471 16:15:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:46.471 16:15:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:18:46.471 16:15:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:46.471 16:15:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:46.471 16:15:12 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.471 16:15:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:46.471 16:15:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:46.471 [2024-12-12 16:15:12.705518] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:46.471 16:15:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.471 16:15:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:46.471 "name": "raid_bdev1", 00:18:46.471 "aliases": [ 00:18:46.471 "9e0c0585-46f4-4f2f-a562-8c9f6cdffe45" 00:18:46.471 ], 00:18:46.471 "product_name": "Raid Volume", 00:18:46.471 "block_size": 4128, 00:18:46.471 "num_blocks": 7936, 00:18:46.471 "uuid": "9e0c0585-46f4-4f2f-a562-8c9f6cdffe45", 00:18:46.471 "md_size": 32, 00:18:46.471 "md_interleave": true, 00:18:46.471 "dif_type": 0, 00:18:46.471 "assigned_rate_limits": { 00:18:46.471 "rw_ios_per_sec": 0, 00:18:46.471 "rw_mbytes_per_sec": 0, 00:18:46.471 "r_mbytes_per_sec": 0, 00:18:46.471 "w_mbytes_per_sec": 0 00:18:46.471 }, 00:18:46.471 "claimed": false, 00:18:46.471 "zoned": false, 00:18:46.471 "supported_io_types": { 00:18:46.471 "read": true, 00:18:46.471 "write": true, 00:18:46.471 "unmap": false, 00:18:46.471 "flush": false, 00:18:46.471 "reset": true, 00:18:46.471 "nvme_admin": false, 00:18:46.471 "nvme_io": false, 00:18:46.471 "nvme_io_md": false, 00:18:46.471 "write_zeroes": true, 00:18:46.471 "zcopy": false, 00:18:46.471 "get_zone_info": false, 00:18:46.471 "zone_management": false, 00:18:46.471 "zone_append": false, 00:18:46.471 "compare": false, 00:18:46.471 "compare_and_write": false, 00:18:46.471 "abort": false, 00:18:46.471 "seek_hole": false, 00:18:46.471 "seek_data": false, 00:18:46.471 "copy": false, 00:18:46.471 "nvme_iov_md": 
false 00:18:46.471 }, 00:18:46.471 "memory_domains": [ 00:18:46.471 { 00:18:46.471 "dma_device_id": "system", 00:18:46.471 "dma_device_type": 1 00:18:46.471 }, 00:18:46.471 { 00:18:46.471 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:46.471 "dma_device_type": 2 00:18:46.471 }, 00:18:46.471 { 00:18:46.471 "dma_device_id": "system", 00:18:46.471 "dma_device_type": 1 00:18:46.471 }, 00:18:46.471 { 00:18:46.471 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:46.471 "dma_device_type": 2 00:18:46.471 } 00:18:46.471 ], 00:18:46.471 "driver_specific": { 00:18:46.471 "raid": { 00:18:46.471 "uuid": "9e0c0585-46f4-4f2f-a562-8c9f6cdffe45", 00:18:46.471 "strip_size_kb": 0, 00:18:46.471 "state": "online", 00:18:46.471 "raid_level": "raid1", 00:18:46.471 "superblock": true, 00:18:46.471 "num_base_bdevs": 2, 00:18:46.471 "num_base_bdevs_discovered": 2, 00:18:46.471 "num_base_bdevs_operational": 2, 00:18:46.471 "base_bdevs_list": [ 00:18:46.471 { 00:18:46.471 "name": "pt1", 00:18:46.471 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:46.471 "is_configured": true, 00:18:46.471 "data_offset": 256, 00:18:46.471 "data_size": 7936 00:18:46.471 }, 00:18:46.471 { 00:18:46.471 "name": "pt2", 00:18:46.471 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:46.471 "is_configured": true, 00:18:46.471 "data_offset": 256, 00:18:46.471 "data_size": 7936 00:18:46.471 } 00:18:46.471 ] 00:18:46.471 } 00:18:46.471 } 00:18:46.471 }' 00:18:46.471 16:15:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:46.471 16:15:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:46.471 pt2' 00:18:46.471 16:15:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:46.732 16:15:12 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:18:46.732 16:15:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:46.732 16:15:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:46.732 16:15:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.732 16:15:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:46.732 16:15:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:46.732 16:15:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.732 16:15:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:46.732 16:15:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:46.732 16:15:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:46.732 16:15:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:46.732 16:15:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:46.732 16:15:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.732 16:15:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:46.732 16:15:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.732 16:15:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='4128 32 true 0' 00:18:46.732 16:15:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:46.732 16:15:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:46.732 16:15:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.732 16:15:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:18:46.732 16:15:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:46.732 [2024-12-12 16:15:12.941169] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:46.732 16:15:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.732 16:15:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' 9e0c0585-46f4-4f2f-a562-8c9f6cdffe45 '!=' 9e0c0585-46f4-4f2f-a562-8c9f6cdffe45 ']' 00:18:46.732 16:15:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:18:46.732 16:15:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:46.732 16:15:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:18:46.732 16:15:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:18:46.732 16:15:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.732 16:15:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:46.732 [2024-12-12 16:15:12.964933] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:18:46.732 16:15:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:18:46.732 16:15:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:46.732 16:15:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:46.732 16:15:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:46.732 16:15:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:46.732 16:15:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:46.732 16:15:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:46.732 16:15:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:46.732 16:15:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:46.732 16:15:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:46.732 16:15:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:46.732 16:15:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:46.732 16:15:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:46.732 16:15:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.732 16:15:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:46.732 16:15:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.732 16:15:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:18:46.732 "name": "raid_bdev1", 00:18:46.732 "uuid": "9e0c0585-46f4-4f2f-a562-8c9f6cdffe45", 00:18:46.732 "strip_size_kb": 0, 00:18:46.732 "state": "online", 00:18:46.732 "raid_level": "raid1", 00:18:46.732 "superblock": true, 00:18:46.732 "num_base_bdevs": 2, 00:18:46.732 "num_base_bdevs_discovered": 1, 00:18:46.732 "num_base_bdevs_operational": 1, 00:18:46.732 "base_bdevs_list": [ 00:18:46.732 { 00:18:46.732 "name": null, 00:18:46.732 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:46.732 "is_configured": false, 00:18:46.732 "data_offset": 0, 00:18:46.732 "data_size": 7936 00:18:46.732 }, 00:18:46.732 { 00:18:46.732 "name": "pt2", 00:18:46.732 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:46.732 "is_configured": true, 00:18:46.732 "data_offset": 256, 00:18:46.732 "data_size": 7936 00:18:46.732 } 00:18:46.732 ] 00:18:46.732 }' 00:18:46.732 16:15:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:46.732 16:15:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:47.302 16:15:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:47.302 16:15:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.302 16:15:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:47.302 [2024-12-12 16:15:13.364191] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:47.302 [2024-12-12 16:15:13.364215] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:47.302 [2024-12-12 16:15:13.364263] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:47.302 [2024-12-12 16:15:13.364298] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:18:47.302 [2024-12-12 16:15:13.364308] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:18:47.302 16:15:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.302 16:15:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:47.302 16:15:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:18:47.302 16:15:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.302 16:15:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:47.302 16:15:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.302 16:15:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:18:47.302 16:15:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:18:47.302 16:15:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:18:47.302 16:15:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:47.302 16:15:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:18:47.302 16:15:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.302 16:15:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:47.302 16:15:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.302 16:15:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:47.302 16:15:13 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:47.302 16:15:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:18:47.302 16:15:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:47.302 16:15:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:18:47.302 16:15:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:47.302 16:15:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.302 16:15:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:47.302 [2024-12-12 16:15:13.436083] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:47.302 [2024-12-12 16:15:13.436133] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:47.302 [2024-12-12 16:15:13.436147] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:18:47.302 [2024-12-12 16:15:13.436157] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:47.302 [2024-12-12 16:15:13.437950] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:47.302 [2024-12-12 16:15:13.437987] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:47.302 [2024-12-12 16:15:13.438025] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:47.302 [2024-12-12 16:15:13.438064] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:47.302 [2024-12-12 16:15:13.438113] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:47.302 [2024-12-12 16:15:13.438123] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: 
blockcnt 7936, blocklen 4128 00:18:47.302 [2024-12-12 16:15:13.438196] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:47.302 [2024-12-12 16:15:13.438268] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:47.302 [2024-12-12 16:15:13.438280] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:18:47.302 [2024-12-12 16:15:13.438331] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:47.302 pt2 00:18:47.302 16:15:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.302 16:15:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:47.302 16:15:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:47.302 16:15:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:47.302 16:15:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:47.302 16:15:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:47.302 16:15:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:47.302 16:15:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:47.302 16:15:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:47.302 16:15:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:47.302 16:15:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:47.302 16:15:13 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:47.302 16:15:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:47.302 16:15:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.302 16:15:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:47.302 16:15:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.302 16:15:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:47.302 "name": "raid_bdev1", 00:18:47.302 "uuid": "9e0c0585-46f4-4f2f-a562-8c9f6cdffe45", 00:18:47.302 "strip_size_kb": 0, 00:18:47.302 "state": "online", 00:18:47.302 "raid_level": "raid1", 00:18:47.302 "superblock": true, 00:18:47.302 "num_base_bdevs": 2, 00:18:47.302 "num_base_bdevs_discovered": 1, 00:18:47.302 "num_base_bdevs_operational": 1, 00:18:47.302 "base_bdevs_list": [ 00:18:47.302 { 00:18:47.302 "name": null, 00:18:47.302 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:47.302 "is_configured": false, 00:18:47.302 "data_offset": 256, 00:18:47.302 "data_size": 7936 00:18:47.302 }, 00:18:47.302 { 00:18:47.302 "name": "pt2", 00:18:47.302 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:47.302 "is_configured": true, 00:18:47.302 "data_offset": 256, 00:18:47.302 "data_size": 7936 00:18:47.302 } 00:18:47.302 ] 00:18:47.302 }' 00:18:47.302 16:15:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:47.302 16:15:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:47.562 16:15:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:47.562 16:15:13 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.562 16:15:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:47.562 [2024-12-12 16:15:13.879501] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:47.562 [2024-12-12 16:15:13.879527] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:47.562 [2024-12-12 16:15:13.879568] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:47.562 [2024-12-12 16:15:13.879602] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:47.562 [2024-12-12 16:15:13.879610] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:18:47.562 16:15:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.562 16:15:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:18:47.562 16:15:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:47.562 16:15:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.562 16:15:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:47.562 16:15:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.822 16:15:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:18:47.822 16:15:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:18:47.822 16:15:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:18:47.822 16:15:13 bdev_raid.raid_superblock_test_md_interleaved 
-- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:47.822 16:15:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.822 16:15:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:47.822 [2024-12-12 16:15:13.927449] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:47.822 [2024-12-12 16:15:13.927510] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:47.822 [2024-12-12 16:15:13.927526] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:18:47.822 [2024-12-12 16:15:13.927534] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:47.822 [2024-12-12 16:15:13.929383] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:47.822 [2024-12-12 16:15:13.929418] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:47.822 [2024-12-12 16:15:13.929475] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:47.822 [2024-12-12 16:15:13.929516] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:47.822 [2024-12-12 16:15:13.929591] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:47.822 [2024-12-12 16:15:13.929601] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:47.822 [2024-12-12 16:15:13.929615] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:18:47.822 [2024-12-12 16:15:13.929676] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:47.822 [2024-12-12 16:15:13.929743] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000008900 00:18:47.822 [2024-12-12 16:15:13.929771] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:47.822 [2024-12-12 16:15:13.929832] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:47.822 [2024-12-12 16:15:13.929888] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:18:47.822 [2024-12-12 16:15:13.929910] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:18:47.822 [2024-12-12 16:15:13.929976] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:47.822 pt1 00:18:47.822 16:15:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.822 16:15:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:18:47.822 16:15:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:47.822 16:15:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:47.822 16:15:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:47.822 16:15:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:47.822 16:15:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:47.822 16:15:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:47.822 16:15:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:47.822 16:15:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:47.822 16:15:13 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:47.822 16:15:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:47.822 16:15:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:47.822 16:15:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.822 16:15:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:47.822 16:15:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:47.822 16:15:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.822 16:15:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:47.822 "name": "raid_bdev1", 00:18:47.822 "uuid": "9e0c0585-46f4-4f2f-a562-8c9f6cdffe45", 00:18:47.822 "strip_size_kb": 0, 00:18:47.822 "state": "online", 00:18:47.822 "raid_level": "raid1", 00:18:47.822 "superblock": true, 00:18:47.822 "num_base_bdevs": 2, 00:18:47.822 "num_base_bdevs_discovered": 1, 00:18:47.822 "num_base_bdevs_operational": 1, 00:18:47.822 "base_bdevs_list": [ 00:18:47.822 { 00:18:47.822 "name": null, 00:18:47.822 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:47.822 "is_configured": false, 00:18:47.822 "data_offset": 256, 00:18:47.822 "data_size": 7936 00:18:47.822 }, 00:18:47.822 { 00:18:47.822 "name": "pt2", 00:18:47.822 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:47.822 "is_configured": true, 00:18:47.822 "data_offset": 256, 00:18:47.822 "data_size": 7936 00:18:47.822 } 00:18:47.822 ] 00:18:47.822 }' 00:18:47.822 16:15:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:47.822 16:15:13 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:18:48.082 16:15:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:18:48.082 16:15:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:48.082 16:15:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.082 16:15:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:48.082 16:15:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.082 16:15:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:18:48.082 16:15:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:48.082 16:15:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:18:48.082 16:15:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.082 16:15:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:48.082 [2024-12-12 16:15:14.406846] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:48.082 16:15:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.342 16:15:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' 9e0c0585-46f4-4f2f-a562-8c9f6cdffe45 '!=' 9e0c0585-46f4-4f2f-a562-8c9f6cdffe45 ']' 00:18:48.342 16:15:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 90798 00:18:48.342 16:15:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 90798 ']' 00:18:48.342 16:15:14 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 90798 00:18:48.342 16:15:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:18:48.342 16:15:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:48.342 16:15:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90798 00:18:48.342 16:15:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:48.342 16:15:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:48.342 killing process with pid 90798 00:18:48.342 16:15:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90798' 00:18:48.342 16:15:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@973 -- # kill 90798 00:18:48.342 [2024-12-12 16:15:14.493585] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:48.342 [2024-12-12 16:15:14.493646] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:48.342 [2024-12-12 16:15:14.493687] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:48.342 [2024-12-12 16:15:14.493699] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:18:48.342 16:15:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@978 -- # wait 90798 00:18:48.342 [2024-12-12 16:15:14.685280] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:49.720 16:15:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:18:49.720 00:18:49.720 real 0m5.787s 00:18:49.720 user 0m8.658s 00:18:49.720 sys 0m1.152s 00:18:49.720 
16:15:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:49.720 16:15:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:49.720 ************************************ 00:18:49.720 END TEST raid_superblock_test_md_interleaved 00:18:49.720 ************************************ 00:18:49.720 16:15:15 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:18:49.720 16:15:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:49.720 16:15:15 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:49.720 16:15:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:49.720 ************************************ 00:18:49.720 START TEST raid_rebuild_test_sb_md_interleaved 00:18:49.720 ************************************ 00:18:49.721 16:15:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false false 00:18:49.721 16:15:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:18:49.721 16:15:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:18:49.721 16:15:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:18:49.721 16:15:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:49.721 16:15:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:18:49.721 16:15:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:49.721 16:15:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:49.721 16:15:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:49.721 16:15:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:49.721 16:15:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:49.721 16:15:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:49.721 16:15:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:49.721 16:15:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:49.721 16:15:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:49.721 16:15:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:49.721 16:15:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:49.721 16:15:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:49.721 16:15:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:49.721 16:15:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:49.721 16:15:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:49.721 16:15:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:18:49.721 16:15:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:18:49.721 16:15:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:18:49.721 16:15:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:18:49.721 16:15:15 bdev_raid.raid_rebuild_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@597 -- # raid_pid=91121 00:18:49.721 16:15:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:49.721 16:15:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 91121 00:18:49.721 16:15:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 91121 ']' 00:18:49.721 16:15:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:49.721 16:15:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:49.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:49.721 16:15:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:49.721 16:15:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:49.721 16:15:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:49.721 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:49.721 Zero copy mechanism will not be used. 00:18:49.721 [2024-12-12 16:15:15.934794] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:18:49.721 [2024-12-12 16:15:15.934953] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91121 ] 00:18:49.981 [2024-12-12 16:15:16.113756] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:49.981 [2024-12-12 16:15:16.216946] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:50.240 [2024-12-12 16:15:16.404205] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:50.240 [2024-12-12 16:15:16.404239] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:50.499 16:15:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:50.499 16:15:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:18:50.499 16:15:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:50.499 16:15:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:18:50.499 16:15:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.499 16:15:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:50.499 BaseBdev1_malloc 00:18:50.499 16:15:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.499 16:15:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:50.499 16:15:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.499 16:15:16 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:50.499 [2024-12-12 16:15:16.786356] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:50.499 [2024-12-12 16:15:16.786434] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:50.499 [2024-12-12 16:15:16.786456] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:50.499 [2024-12-12 16:15:16.786467] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:50.499 [2024-12-12 16:15:16.788253] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:50.499 [2024-12-12 16:15:16.788291] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:50.499 BaseBdev1 00:18:50.499 16:15:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.499 16:15:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:50.499 16:15:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:18:50.499 16:15:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.499 16:15:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:50.499 BaseBdev2_malloc 00:18:50.499 16:15:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.499 16:15:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:50.499 16:15:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.499 16:15:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:18:50.499 [2024-12-12 16:15:16.838722] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:50.499 [2024-12-12 16:15:16.838793] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:50.499 [2024-12-12 16:15:16.838812] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:50.499 [2024-12-12 16:15:16.838824] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:50.499 [2024-12-12 16:15:16.840627] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:50.499 [2024-12-12 16:15:16.840664] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:50.499 BaseBdev2 00:18:50.499 16:15:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.499 16:15:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:18:50.500 16:15:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.500 16:15:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:50.759 spare_malloc 00:18:50.759 16:15:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.759 16:15:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:50.759 16:15:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.759 16:15:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:50.759 spare_delay 00:18:50.759 16:15:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.759 16:15:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:50.759 16:15:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.759 16:15:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:50.759 [2024-12-12 16:15:16.929533] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:50.759 [2024-12-12 16:15:16.929605] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:50.759 [2024-12-12 16:15:16.929625] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:50.759 [2024-12-12 16:15:16.929635] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:50.759 [2024-12-12 16:15:16.931417] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:50.759 [2024-12-12 16:15:16.931456] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:50.759 spare 00:18:50.759 16:15:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.759 16:15:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:18:50.759 16:15:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.759 16:15:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:50.759 [2024-12-12 16:15:16.941558] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:50.759 [2024-12-12 16:15:16.943313] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:50.759 [2024-12-12 
16:15:16.943505] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:50.759 [2024-12-12 16:15:16.943521] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:50.759 [2024-12-12 16:15:16.943600] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:50.759 [2024-12-12 16:15:16.943675] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:50.759 [2024-12-12 16:15:16.943688] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:50.759 [2024-12-12 16:15:16.943753] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:50.759 16:15:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.759 16:15:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:50.759 16:15:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:50.759 16:15:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:50.759 16:15:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:50.759 16:15:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:50.759 16:15:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:50.759 16:15:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:50.759 16:15:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:50.759 16:15:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:18:50.759 16:15:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:50.759 16:15:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:50.760 16:15:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:50.760 16:15:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.760 16:15:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:50.760 16:15:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.760 16:15:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:50.760 "name": "raid_bdev1", 00:18:50.760 "uuid": "ab4585d5-d19b-416a-b0f4-1949467552d8", 00:18:50.760 "strip_size_kb": 0, 00:18:50.760 "state": "online", 00:18:50.760 "raid_level": "raid1", 00:18:50.760 "superblock": true, 00:18:50.760 "num_base_bdevs": 2, 00:18:50.760 "num_base_bdevs_discovered": 2, 00:18:50.760 "num_base_bdevs_operational": 2, 00:18:50.760 "base_bdevs_list": [ 00:18:50.760 { 00:18:50.760 "name": "BaseBdev1", 00:18:50.760 "uuid": "591212ca-8bb3-5584-be26-858bb6956563", 00:18:50.760 "is_configured": true, 00:18:50.760 "data_offset": 256, 00:18:50.760 "data_size": 7936 00:18:50.760 }, 00:18:50.760 { 00:18:50.760 "name": "BaseBdev2", 00:18:50.760 "uuid": "db9681a2-23c7-5055-92c3-781279299114", 00:18:50.760 "is_configured": true, 00:18:50.760 "data_offset": 256, 00:18:50.760 "data_size": 7936 00:18:50.760 } 00:18:50.760 ] 00:18:50.760 }' 00:18:50.760 16:15:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:50.760 16:15:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:51.019 16:15:17 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:51.019 16:15:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:51.019 16:15:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.019 16:15:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:51.019 [2024-12-12 16:15:17.369141] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:51.278 16:15:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.278 16:15:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:18:51.278 16:15:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:51.278 16:15:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.278 16:15:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:51.278 16:15:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:51.278 16:15:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.278 16:15:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:18:51.278 16:15:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:51.278 16:15:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:18:51.278 16:15:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:51.278 16:15:17 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.278 16:15:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:51.278 [2024-12-12 16:15:17.448723] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:51.278 16:15:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.278 16:15:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:51.278 16:15:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:51.278 16:15:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:51.278 16:15:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:51.278 16:15:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:51.279 16:15:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:51.279 16:15:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:51.279 16:15:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:51.279 16:15:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:51.279 16:15:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:51.279 16:15:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:51.279 16:15:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:51.279 16:15:17 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.279 16:15:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:51.279 16:15:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.279 16:15:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:51.279 "name": "raid_bdev1", 00:18:51.279 "uuid": "ab4585d5-d19b-416a-b0f4-1949467552d8", 00:18:51.279 "strip_size_kb": 0, 00:18:51.279 "state": "online", 00:18:51.279 "raid_level": "raid1", 00:18:51.279 "superblock": true, 00:18:51.279 "num_base_bdevs": 2, 00:18:51.279 "num_base_bdevs_discovered": 1, 00:18:51.279 "num_base_bdevs_operational": 1, 00:18:51.279 "base_bdevs_list": [ 00:18:51.279 { 00:18:51.279 "name": null, 00:18:51.279 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:51.279 "is_configured": false, 00:18:51.279 "data_offset": 0, 00:18:51.279 "data_size": 7936 00:18:51.279 }, 00:18:51.279 { 00:18:51.279 "name": "BaseBdev2", 00:18:51.279 "uuid": "db9681a2-23c7-5055-92c3-781279299114", 00:18:51.279 "is_configured": true, 00:18:51.279 "data_offset": 256, 00:18:51.279 "data_size": 7936 00:18:51.279 } 00:18:51.279 ] 00:18:51.279 }' 00:18:51.279 16:15:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:51.279 16:15:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:51.848 16:15:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:51.848 16:15:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.848 16:15:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:51.848 [2024-12-12 16:15:17.915952] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:51.848 [2024-12-12 16:15:17.931643] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:51.848 16:15:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.848 16:15:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:51.848 [2024-12-12 16:15:17.933446] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:52.786 16:15:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:52.786 16:15:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:52.786 16:15:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:52.786 16:15:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:52.786 16:15:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:52.786 16:15:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:52.786 16:15:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:52.786 16:15:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.786 16:15:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:52.786 16:15:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.786 16:15:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:52.786 "name": "raid_bdev1", 00:18:52.786 
"uuid": "ab4585d5-d19b-416a-b0f4-1949467552d8", 00:18:52.786 "strip_size_kb": 0, 00:18:52.786 "state": "online", 00:18:52.786 "raid_level": "raid1", 00:18:52.786 "superblock": true, 00:18:52.786 "num_base_bdevs": 2, 00:18:52.786 "num_base_bdevs_discovered": 2, 00:18:52.786 "num_base_bdevs_operational": 2, 00:18:52.786 "process": { 00:18:52.786 "type": "rebuild", 00:18:52.786 "target": "spare", 00:18:52.786 "progress": { 00:18:52.786 "blocks": 2560, 00:18:52.786 "percent": 32 00:18:52.786 } 00:18:52.786 }, 00:18:52.786 "base_bdevs_list": [ 00:18:52.786 { 00:18:52.786 "name": "spare", 00:18:52.786 "uuid": "668f9c7d-c898-5a26-aff0-b16363d71bdd", 00:18:52.786 "is_configured": true, 00:18:52.786 "data_offset": 256, 00:18:52.786 "data_size": 7936 00:18:52.786 }, 00:18:52.786 { 00:18:52.786 "name": "BaseBdev2", 00:18:52.786 "uuid": "db9681a2-23c7-5055-92c3-781279299114", 00:18:52.786 "is_configured": true, 00:18:52.786 "data_offset": 256, 00:18:52.786 "data_size": 7936 00:18:52.786 } 00:18:52.786 ] 00:18:52.786 }' 00:18:52.786 16:15:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:52.786 16:15:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:52.786 16:15:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:52.786 16:15:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:52.786 16:15:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:52.786 16:15:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.786 16:15:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:52.786 [2024-12-12 16:15:19.084870] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:18:53.046 [2024-12-12 16:15:19.138345] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:53.046 [2024-12-12 16:15:19.138400] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:53.046 [2024-12-12 16:15:19.138430] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:53.046 [2024-12-12 16:15:19.138442] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:53.046 16:15:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.046 16:15:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:53.046 16:15:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:53.046 16:15:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:53.046 16:15:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:53.046 16:15:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:53.046 16:15:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:53.046 16:15:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:53.046 16:15:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:53.046 16:15:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:53.046 16:15:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:53.046 16:15:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:53.046 16:15:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:53.046 16:15:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.046 16:15:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:53.046 16:15:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.046 16:15:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:53.046 "name": "raid_bdev1", 00:18:53.046 "uuid": "ab4585d5-d19b-416a-b0f4-1949467552d8", 00:18:53.046 "strip_size_kb": 0, 00:18:53.046 "state": "online", 00:18:53.046 "raid_level": "raid1", 00:18:53.046 "superblock": true, 00:18:53.046 "num_base_bdevs": 2, 00:18:53.046 "num_base_bdevs_discovered": 1, 00:18:53.046 "num_base_bdevs_operational": 1, 00:18:53.046 "base_bdevs_list": [ 00:18:53.046 { 00:18:53.046 "name": null, 00:18:53.046 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:53.046 "is_configured": false, 00:18:53.046 "data_offset": 0, 00:18:53.046 "data_size": 7936 00:18:53.046 }, 00:18:53.046 { 00:18:53.046 "name": "BaseBdev2", 00:18:53.046 "uuid": "db9681a2-23c7-5055-92c3-781279299114", 00:18:53.046 "is_configured": true, 00:18:53.046 "data_offset": 256, 00:18:53.046 "data_size": 7936 00:18:53.046 } 00:18:53.046 ] 00:18:53.046 }' 00:18:53.046 16:15:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:53.046 16:15:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:53.305 16:15:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:53.305 16:15:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:18:53.305 16:15:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:53.305 16:15:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:53.305 16:15:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:53.305 16:15:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:53.305 16:15:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.305 16:15:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:53.305 16:15:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:53.305 16:15:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.305 16:15:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:53.305 "name": "raid_bdev1", 00:18:53.305 "uuid": "ab4585d5-d19b-416a-b0f4-1949467552d8", 00:18:53.305 "strip_size_kb": 0, 00:18:53.305 "state": "online", 00:18:53.305 "raid_level": "raid1", 00:18:53.305 "superblock": true, 00:18:53.305 "num_base_bdevs": 2, 00:18:53.305 "num_base_bdevs_discovered": 1, 00:18:53.305 "num_base_bdevs_operational": 1, 00:18:53.306 "base_bdevs_list": [ 00:18:53.306 { 00:18:53.306 "name": null, 00:18:53.306 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:53.306 "is_configured": false, 00:18:53.306 "data_offset": 0, 00:18:53.306 "data_size": 7936 00:18:53.306 }, 00:18:53.306 { 00:18:53.306 "name": "BaseBdev2", 00:18:53.306 "uuid": "db9681a2-23c7-5055-92c3-781279299114", 00:18:53.306 "is_configured": true, 00:18:53.306 "data_offset": 256, 00:18:53.306 "data_size": 7936 00:18:53.306 } 00:18:53.306 ] 00:18:53.306 }' 
00:18:53.306 16:15:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:53.564 16:15:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:53.564 16:15:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:53.564 16:15:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:53.564 16:15:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:53.564 16:15:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.564 16:15:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:53.564 [2024-12-12 16:15:19.743090] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:53.564 [2024-12-12 16:15:19.757932] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:53.564 16:15:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.564 16:15:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:53.564 [2024-12-12 16:15:19.759777] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:54.500 16:15:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:54.500 16:15:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:54.500 16:15:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:54.500 16:15:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:18:54.500 16:15:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:54.500 16:15:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:54.500 16:15:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:54.500 16:15:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.500 16:15:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:54.500 16:15:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.500 16:15:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:54.500 "name": "raid_bdev1", 00:18:54.500 "uuid": "ab4585d5-d19b-416a-b0f4-1949467552d8", 00:18:54.500 "strip_size_kb": 0, 00:18:54.500 "state": "online", 00:18:54.500 "raid_level": "raid1", 00:18:54.500 "superblock": true, 00:18:54.500 "num_base_bdevs": 2, 00:18:54.500 "num_base_bdevs_discovered": 2, 00:18:54.500 "num_base_bdevs_operational": 2, 00:18:54.500 "process": { 00:18:54.500 "type": "rebuild", 00:18:54.500 "target": "spare", 00:18:54.500 "progress": { 00:18:54.500 "blocks": 2560, 00:18:54.500 "percent": 32 00:18:54.500 } 00:18:54.500 }, 00:18:54.500 "base_bdevs_list": [ 00:18:54.500 { 00:18:54.500 "name": "spare", 00:18:54.500 "uuid": "668f9c7d-c898-5a26-aff0-b16363d71bdd", 00:18:54.500 "is_configured": true, 00:18:54.500 "data_offset": 256, 00:18:54.500 "data_size": 7936 00:18:54.500 }, 00:18:54.500 { 00:18:54.500 "name": "BaseBdev2", 00:18:54.500 "uuid": "db9681a2-23c7-5055-92c3-781279299114", 00:18:54.500 "is_configured": true, 00:18:54.500 "data_offset": 256, 00:18:54.500 "data_size": 7936 00:18:54.500 } 00:18:54.500 ] 00:18:54.500 }' 00:18:54.500 16:15:20 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:54.760 16:15:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:54.760 16:15:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:54.760 16:15:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:54.760 16:15:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:18:54.760 16:15:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:18:54.760 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:18:54.760 16:15:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:18:54.760 16:15:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:18:54.760 16:15:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:18:54.760 16:15:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=748 00:18:54.760 16:15:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:54.760 16:15:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:54.760 16:15:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:54.760 16:15:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:54.760 16:15:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:54.760 16:15:20 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:54.760 16:15:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:54.760 16:15:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:54.760 16:15:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.760 16:15:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:54.760 16:15:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.760 16:15:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:54.760 "name": "raid_bdev1", 00:18:54.760 "uuid": "ab4585d5-d19b-416a-b0f4-1949467552d8", 00:18:54.760 "strip_size_kb": 0, 00:18:54.760 "state": "online", 00:18:54.760 "raid_level": "raid1", 00:18:54.760 "superblock": true, 00:18:54.760 "num_base_bdevs": 2, 00:18:54.760 "num_base_bdevs_discovered": 2, 00:18:54.760 "num_base_bdevs_operational": 2, 00:18:54.760 "process": { 00:18:54.760 "type": "rebuild", 00:18:54.760 "target": "spare", 00:18:54.760 "progress": { 00:18:54.760 "blocks": 2816, 00:18:54.760 "percent": 35 00:18:54.760 } 00:18:54.760 }, 00:18:54.760 "base_bdevs_list": [ 00:18:54.760 { 00:18:54.760 "name": "spare", 00:18:54.760 "uuid": "668f9c7d-c898-5a26-aff0-b16363d71bdd", 00:18:54.760 "is_configured": true, 00:18:54.760 "data_offset": 256, 00:18:54.760 "data_size": 7936 00:18:54.760 }, 00:18:54.760 { 00:18:54.760 "name": "BaseBdev2", 00:18:54.760 "uuid": "db9681a2-23c7-5055-92c3-781279299114", 00:18:54.760 "is_configured": true, 00:18:54.760 "data_offset": 256, 00:18:54.760 "data_size": 7936 00:18:54.760 } 00:18:54.760 ] 00:18:54.760 }' 00:18:54.760 16:15:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:54.760 16:15:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:54.760 16:15:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:54.760 16:15:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:54.760 16:15:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:55.698 16:15:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:55.698 16:15:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:55.698 16:15:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:55.698 16:15:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:55.698 16:15:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:55.698 16:15:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:55.958 16:15:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:55.958 16:15:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:55.958 16:15:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.958 16:15:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:55.958 16:15:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.958 16:15:22 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:55.958 "name": "raid_bdev1", 00:18:55.958 "uuid": "ab4585d5-d19b-416a-b0f4-1949467552d8", 00:18:55.958 "strip_size_kb": 0, 00:18:55.958 "state": "online", 00:18:55.958 "raid_level": "raid1", 00:18:55.958 "superblock": true, 00:18:55.958 "num_base_bdevs": 2, 00:18:55.958 "num_base_bdevs_discovered": 2, 00:18:55.958 "num_base_bdevs_operational": 2, 00:18:55.958 "process": { 00:18:55.958 "type": "rebuild", 00:18:55.958 "target": "spare", 00:18:55.958 "progress": { 00:18:55.958 "blocks": 5632, 00:18:55.958 "percent": 70 00:18:55.958 } 00:18:55.958 }, 00:18:55.958 "base_bdevs_list": [ 00:18:55.958 { 00:18:55.958 "name": "spare", 00:18:55.958 "uuid": "668f9c7d-c898-5a26-aff0-b16363d71bdd", 00:18:55.958 "is_configured": true, 00:18:55.958 "data_offset": 256, 00:18:55.958 "data_size": 7936 00:18:55.958 }, 00:18:55.958 { 00:18:55.958 "name": "BaseBdev2", 00:18:55.958 "uuid": "db9681a2-23c7-5055-92c3-781279299114", 00:18:55.958 "is_configured": true, 00:18:55.958 "data_offset": 256, 00:18:55.958 "data_size": 7936 00:18:55.958 } 00:18:55.958 ] 00:18:55.958 }' 00:18:55.958 16:15:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:55.958 16:15:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:55.958 16:15:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:55.958 16:15:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:55.958 16:15:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:56.527 [2024-12-12 16:15:22.872000] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:56.527 [2024-12-12 16:15:22.872067] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:56.527 [2024-12-12 16:15:22.872161] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:57.096 16:15:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:57.096 16:15:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:57.096 16:15:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:57.096 16:15:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:57.096 16:15:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:57.096 16:15:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:57.096 16:15:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:57.096 16:15:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:57.096 16:15:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.096 16:15:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:57.096 16:15:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.096 16:15:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:57.096 "name": "raid_bdev1", 00:18:57.096 "uuid": "ab4585d5-d19b-416a-b0f4-1949467552d8", 00:18:57.096 "strip_size_kb": 0, 00:18:57.096 "state": "online", 00:18:57.096 "raid_level": "raid1", 00:18:57.096 "superblock": true, 00:18:57.096 "num_base_bdevs": 2, 00:18:57.096 
"num_base_bdevs_discovered": 2, 00:18:57.096 "num_base_bdevs_operational": 2, 00:18:57.096 "base_bdevs_list": [ 00:18:57.096 { 00:18:57.096 "name": "spare", 00:18:57.096 "uuid": "668f9c7d-c898-5a26-aff0-b16363d71bdd", 00:18:57.096 "is_configured": true, 00:18:57.096 "data_offset": 256, 00:18:57.096 "data_size": 7936 00:18:57.096 }, 00:18:57.096 { 00:18:57.096 "name": "BaseBdev2", 00:18:57.096 "uuid": "db9681a2-23c7-5055-92c3-781279299114", 00:18:57.096 "is_configured": true, 00:18:57.096 "data_offset": 256, 00:18:57.096 "data_size": 7936 00:18:57.096 } 00:18:57.096 ] 00:18:57.096 }' 00:18:57.096 16:15:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:57.096 16:15:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:57.096 16:15:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:57.096 16:15:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:57.096 16:15:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:18:57.096 16:15:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:57.096 16:15:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:57.096 16:15:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:57.096 16:15:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:57.096 16:15:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:57.096 16:15:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:57.096 16:15:23 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.096 16:15:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:57.096 16:15:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:57.096 16:15:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.096 16:15:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:57.096 "name": "raid_bdev1", 00:18:57.096 "uuid": "ab4585d5-d19b-416a-b0f4-1949467552d8", 00:18:57.096 "strip_size_kb": 0, 00:18:57.096 "state": "online", 00:18:57.096 "raid_level": "raid1", 00:18:57.096 "superblock": true, 00:18:57.096 "num_base_bdevs": 2, 00:18:57.096 "num_base_bdevs_discovered": 2, 00:18:57.096 "num_base_bdevs_operational": 2, 00:18:57.096 "base_bdevs_list": [ 00:18:57.096 { 00:18:57.096 "name": "spare", 00:18:57.096 "uuid": "668f9c7d-c898-5a26-aff0-b16363d71bdd", 00:18:57.096 "is_configured": true, 00:18:57.096 "data_offset": 256, 00:18:57.096 "data_size": 7936 00:18:57.096 }, 00:18:57.096 { 00:18:57.096 "name": "BaseBdev2", 00:18:57.096 "uuid": "db9681a2-23c7-5055-92c3-781279299114", 00:18:57.096 "is_configured": true, 00:18:57.096 "data_offset": 256, 00:18:57.096 "data_size": 7936 00:18:57.096 } 00:18:57.096 ] 00:18:57.096 }' 00:18:57.096 16:15:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:57.096 16:15:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:57.096 16:15:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:57.356 16:15:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:57.356 16:15:23 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:57.356 16:15:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:57.356 16:15:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:57.356 16:15:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:57.356 16:15:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:57.356 16:15:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:57.356 16:15:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:57.356 16:15:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:57.356 16:15:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:57.356 16:15:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:57.356 16:15:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:57.356 16:15:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.356 16:15:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:57.356 16:15:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:57.356 16:15:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.356 16:15:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:57.356 "name": 
"raid_bdev1", 00:18:57.356 "uuid": "ab4585d5-d19b-416a-b0f4-1949467552d8", 00:18:57.356 "strip_size_kb": 0, 00:18:57.356 "state": "online", 00:18:57.356 "raid_level": "raid1", 00:18:57.356 "superblock": true, 00:18:57.356 "num_base_bdevs": 2, 00:18:57.356 "num_base_bdevs_discovered": 2, 00:18:57.356 "num_base_bdevs_operational": 2, 00:18:57.356 "base_bdevs_list": [ 00:18:57.356 { 00:18:57.356 "name": "spare", 00:18:57.356 "uuid": "668f9c7d-c898-5a26-aff0-b16363d71bdd", 00:18:57.356 "is_configured": true, 00:18:57.356 "data_offset": 256, 00:18:57.356 "data_size": 7936 00:18:57.356 }, 00:18:57.356 { 00:18:57.356 "name": "BaseBdev2", 00:18:57.356 "uuid": "db9681a2-23c7-5055-92c3-781279299114", 00:18:57.356 "is_configured": true, 00:18:57.356 "data_offset": 256, 00:18:57.356 "data_size": 7936 00:18:57.356 } 00:18:57.356 ] 00:18:57.356 }' 00:18:57.356 16:15:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:57.356 16:15:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:57.616 16:15:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:57.616 16:15:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.616 16:15:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:57.616 [2024-12-12 16:15:23.931997] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:57.616 [2024-12-12 16:15:23.932033] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:57.616 [2024-12-12 16:15:23.932113] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:57.616 [2024-12-12 16:15:23.932178] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:57.616 [2024-12-12 
16:15:23.932187] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:57.616 16:15:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.616 16:15:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:57.616 16:15:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.616 16:15:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:57.616 16:15:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:18:57.616 16:15:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.876 16:15:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:57.876 16:15:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:18:57.876 16:15:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:18:57.876 16:15:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:18:57.876 16:15:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.876 16:15:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:57.876 16:15:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.876 16:15:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:57.876 16:15:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.876 16:15:23 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:57.876 [2024-12-12 16:15:24.007874] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:57.876 [2024-12-12 16:15:24.007950] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:57.876 [2024-12-12 16:15:24.007970] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:18:57.876 [2024-12-12 16:15:24.007979] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:57.876 [2024-12-12 16:15:24.009750] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:57.876 [2024-12-12 16:15:24.009788] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:57.876 [2024-12-12 16:15:24.009840] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:57.876 [2024-12-12 16:15:24.009906] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:57.876 [2024-12-12 16:15:24.010028] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:57.876 spare 00:18:57.876 16:15:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.876 16:15:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:57.876 16:15:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.876 16:15:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:57.876 [2024-12-12 16:15:24.109920] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:57.876 [2024-12-12 16:15:24.109949] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:57.876 [2024-12-12 16:15:24.110031] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:18:57.876 [2024-12-12 16:15:24.110107] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:57.876 [2024-12-12 16:15:24.110117] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:18:57.876 [2024-12-12 16:15:24.110187] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:57.876 16:15:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.876 16:15:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:57.876 16:15:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:57.876 16:15:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:57.876 16:15:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:57.876 16:15:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:57.876 16:15:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:57.876 16:15:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:57.876 16:15:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:57.876 16:15:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:57.876 16:15:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:57.876 16:15:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:57.876 16:15:24 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:57.876 16:15:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.876 16:15:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:57.876 16:15:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.876 16:15:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:57.876 "name": "raid_bdev1", 00:18:57.876 "uuid": "ab4585d5-d19b-416a-b0f4-1949467552d8", 00:18:57.876 "strip_size_kb": 0, 00:18:57.876 "state": "online", 00:18:57.876 "raid_level": "raid1", 00:18:57.876 "superblock": true, 00:18:57.876 "num_base_bdevs": 2, 00:18:57.876 "num_base_bdevs_discovered": 2, 00:18:57.876 "num_base_bdevs_operational": 2, 00:18:57.876 "base_bdevs_list": [ 00:18:57.876 { 00:18:57.876 "name": "spare", 00:18:57.876 "uuid": "668f9c7d-c898-5a26-aff0-b16363d71bdd", 00:18:57.876 "is_configured": true, 00:18:57.876 "data_offset": 256, 00:18:57.876 "data_size": 7936 00:18:57.876 }, 00:18:57.876 { 00:18:57.876 "name": "BaseBdev2", 00:18:57.876 "uuid": "db9681a2-23c7-5055-92c3-781279299114", 00:18:57.876 "is_configured": true, 00:18:57.876 "data_offset": 256, 00:18:57.876 "data_size": 7936 00:18:57.876 } 00:18:57.876 ] 00:18:57.876 }' 00:18:57.876 16:15:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:57.876 16:15:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:58.445 16:15:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:58.445 16:15:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:58.445 16:15:24 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:58.445 16:15:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:58.445 16:15:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:58.445 16:15:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:58.445 16:15:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:58.445 16:15:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.445 16:15:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:58.445 16:15:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.445 16:15:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:58.445 "name": "raid_bdev1", 00:18:58.445 "uuid": "ab4585d5-d19b-416a-b0f4-1949467552d8", 00:18:58.445 "strip_size_kb": 0, 00:18:58.445 "state": "online", 00:18:58.445 "raid_level": "raid1", 00:18:58.445 "superblock": true, 00:18:58.445 "num_base_bdevs": 2, 00:18:58.445 "num_base_bdevs_discovered": 2, 00:18:58.445 "num_base_bdevs_operational": 2, 00:18:58.445 "base_bdevs_list": [ 00:18:58.445 { 00:18:58.445 "name": "spare", 00:18:58.445 "uuid": "668f9c7d-c898-5a26-aff0-b16363d71bdd", 00:18:58.445 "is_configured": true, 00:18:58.445 "data_offset": 256, 00:18:58.445 "data_size": 7936 00:18:58.445 }, 00:18:58.445 { 00:18:58.445 "name": "BaseBdev2", 00:18:58.445 "uuid": "db9681a2-23c7-5055-92c3-781279299114", 00:18:58.445 "is_configured": true, 00:18:58.445 "data_offset": 256, 00:18:58.445 "data_size": 7936 00:18:58.445 } 00:18:58.445 ] 00:18:58.445 }' 00:18:58.445 16:15:24 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:58.445 16:15:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:58.445 16:15:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:58.445 16:15:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:58.445 16:15:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:58.445 16:15:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.445 16:15:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:58.445 16:15:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:58.445 16:15:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.445 16:15:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:18:58.445 16:15:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:58.445 16:15:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.445 16:15:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:58.445 [2024-12-12 16:15:24.782846] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:58.445 16:15:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.445 16:15:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:58.445 16:15:24 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:58.445 16:15:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:58.445 16:15:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:58.445 16:15:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:58.445 16:15:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:58.445 16:15:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:58.445 16:15:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:58.445 16:15:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:58.445 16:15:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:58.445 16:15:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:58.705 16:15:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.705 16:15:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:58.705 16:15:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:58.705 16:15:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.705 16:15:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:58.705 "name": "raid_bdev1", 00:18:58.705 "uuid": "ab4585d5-d19b-416a-b0f4-1949467552d8", 00:18:58.705 "strip_size_kb": 0, 00:18:58.705 "state": "online", 00:18:58.705 
"raid_level": "raid1", 00:18:58.705 "superblock": true, 00:18:58.705 "num_base_bdevs": 2, 00:18:58.705 "num_base_bdevs_discovered": 1, 00:18:58.705 "num_base_bdevs_operational": 1, 00:18:58.705 "base_bdevs_list": [ 00:18:58.705 { 00:18:58.705 "name": null, 00:18:58.705 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:58.705 "is_configured": false, 00:18:58.705 "data_offset": 0, 00:18:58.705 "data_size": 7936 00:18:58.705 }, 00:18:58.705 { 00:18:58.705 "name": "BaseBdev2", 00:18:58.705 "uuid": "db9681a2-23c7-5055-92c3-781279299114", 00:18:58.705 "is_configured": true, 00:18:58.705 "data_offset": 256, 00:18:58.705 "data_size": 7936 00:18:58.705 } 00:18:58.705 ] 00:18:58.705 }' 00:18:58.705 16:15:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:58.705 16:15:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:58.968 16:15:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:58.968 16:15:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.968 16:15:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:58.968 [2024-12-12 16:15:25.198103] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:58.968 [2024-12-12 16:15:25.198256] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:58.968 [2024-12-12 16:15:25.198273] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:58.968 [2024-12-12 16:15:25.198322] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:58.968 [2024-12-12 16:15:25.213679] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:18:58.968 16:15:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.968 16:15:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:58.968 [2024-12-12 16:15:25.215427] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:59.905 16:15:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:59.905 16:15:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:59.905 16:15:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:59.905 16:15:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:59.905 16:15:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:59.905 16:15:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:59.905 16:15:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:59.905 16:15:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.905 16:15:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:59.905 16:15:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.164 16:15:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:19:00.164 "name": "raid_bdev1", 00:19:00.165 "uuid": "ab4585d5-d19b-416a-b0f4-1949467552d8", 00:19:00.165 "strip_size_kb": 0, 00:19:00.165 "state": "online", 00:19:00.165 "raid_level": "raid1", 00:19:00.165 "superblock": true, 00:19:00.165 "num_base_bdevs": 2, 00:19:00.165 "num_base_bdevs_discovered": 2, 00:19:00.165 "num_base_bdevs_operational": 2, 00:19:00.165 "process": { 00:19:00.165 "type": "rebuild", 00:19:00.165 "target": "spare", 00:19:00.165 "progress": { 00:19:00.165 "blocks": 2560, 00:19:00.165 "percent": 32 00:19:00.165 } 00:19:00.165 }, 00:19:00.165 "base_bdevs_list": [ 00:19:00.165 { 00:19:00.165 "name": "spare", 00:19:00.165 "uuid": "668f9c7d-c898-5a26-aff0-b16363d71bdd", 00:19:00.165 "is_configured": true, 00:19:00.165 "data_offset": 256, 00:19:00.165 "data_size": 7936 00:19:00.165 }, 00:19:00.165 { 00:19:00.165 "name": "BaseBdev2", 00:19:00.165 "uuid": "db9681a2-23c7-5055-92c3-781279299114", 00:19:00.165 "is_configured": true, 00:19:00.165 "data_offset": 256, 00:19:00.165 "data_size": 7936 00:19:00.165 } 00:19:00.165 ] 00:19:00.165 }' 00:19:00.165 16:15:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:00.165 16:15:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:00.165 16:15:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:00.165 16:15:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:00.165 16:15:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:19:00.165 16:15:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.165 16:15:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:00.165 [2024-12-12 16:15:26.355298] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:00.165 [2024-12-12 16:15:26.420231] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:00.165 [2024-12-12 16:15:26.420297] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:00.165 [2024-12-12 16:15:26.420312] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:00.165 [2024-12-12 16:15:26.420320] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:00.165 16:15:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.165 16:15:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:00.165 16:15:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:00.165 16:15:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:00.165 16:15:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:00.165 16:15:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:00.165 16:15:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:00.165 16:15:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:00.165 16:15:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:00.165 16:15:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:00.165 16:15:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:00.165 16:15:26 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:00.165 16:15:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:00.165 16:15:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.165 16:15:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:00.165 16:15:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.165 16:15:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:00.165 "name": "raid_bdev1", 00:19:00.165 "uuid": "ab4585d5-d19b-416a-b0f4-1949467552d8", 00:19:00.165 "strip_size_kb": 0, 00:19:00.165 "state": "online", 00:19:00.165 "raid_level": "raid1", 00:19:00.165 "superblock": true, 00:19:00.165 "num_base_bdevs": 2, 00:19:00.165 "num_base_bdevs_discovered": 1, 00:19:00.165 "num_base_bdevs_operational": 1, 00:19:00.165 "base_bdevs_list": [ 00:19:00.165 { 00:19:00.165 "name": null, 00:19:00.165 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:00.165 "is_configured": false, 00:19:00.165 "data_offset": 0, 00:19:00.165 "data_size": 7936 00:19:00.165 }, 00:19:00.165 { 00:19:00.165 "name": "BaseBdev2", 00:19:00.165 "uuid": "db9681a2-23c7-5055-92c3-781279299114", 00:19:00.165 "is_configured": true, 00:19:00.165 "data_offset": 256, 00:19:00.165 "data_size": 7936 00:19:00.165 } 00:19:00.165 ] 00:19:00.165 }' 00:19:00.165 16:15:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:00.165 16:15:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:00.734 16:15:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:00.734 16:15:26 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.734 16:15:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:00.734 [2024-12-12 16:15:26.893180] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:00.734 [2024-12-12 16:15:26.893263] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:00.734 [2024-12-12 16:15:26.893290] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:19:00.734 [2024-12-12 16:15:26.893301] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:00.734 [2024-12-12 16:15:26.893478] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:00.734 [2024-12-12 16:15:26.893510] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:00.734 [2024-12-12 16:15:26.893560] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:00.734 [2024-12-12 16:15:26.893590] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:00.734 [2024-12-12 16:15:26.893599] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:19:00.734 [2024-12-12 16:15:26.893619] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:00.734 [2024-12-12 16:15:26.908953] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:19:00.734 spare 00:19:00.734 16:15:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.734 [2024-12-12 16:15:26.910649] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:00.734 16:15:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:19:01.672 16:15:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:01.672 16:15:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:01.672 16:15:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:01.672 16:15:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:01.672 16:15:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:01.672 16:15:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:01.672 16:15:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.672 16:15:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:01.672 16:15:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:01.672 16:15:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.672 16:15:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:19:01.672 "name": "raid_bdev1", 00:19:01.672 "uuid": "ab4585d5-d19b-416a-b0f4-1949467552d8", 00:19:01.672 "strip_size_kb": 0, 00:19:01.672 "state": "online", 00:19:01.672 "raid_level": "raid1", 00:19:01.672 "superblock": true, 00:19:01.672 "num_base_bdevs": 2, 00:19:01.672 "num_base_bdevs_discovered": 2, 00:19:01.672 "num_base_bdevs_operational": 2, 00:19:01.672 "process": { 00:19:01.672 "type": "rebuild", 00:19:01.672 "target": "spare", 00:19:01.672 "progress": { 00:19:01.672 "blocks": 2560, 00:19:01.672 "percent": 32 00:19:01.672 } 00:19:01.672 }, 00:19:01.672 "base_bdevs_list": [ 00:19:01.672 { 00:19:01.672 "name": "spare", 00:19:01.672 "uuid": "668f9c7d-c898-5a26-aff0-b16363d71bdd", 00:19:01.672 "is_configured": true, 00:19:01.672 "data_offset": 256, 00:19:01.672 "data_size": 7936 00:19:01.672 }, 00:19:01.672 { 00:19:01.672 "name": "BaseBdev2", 00:19:01.672 "uuid": "db9681a2-23c7-5055-92c3-781279299114", 00:19:01.672 "is_configured": true, 00:19:01.672 "data_offset": 256, 00:19:01.672 "data_size": 7936 00:19:01.672 } 00:19:01.672 ] 00:19:01.672 }' 00:19:01.672 16:15:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:01.672 16:15:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:01.672 16:15:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:01.932 16:15:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:01.932 16:15:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:19:01.932 16:15:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.932 16:15:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:01.932 [2024-12-12 
16:15:28.031684] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:01.932 [2024-12-12 16:15:28.115485] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:01.932 [2024-12-12 16:15:28.115553] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:01.932 [2024-12-12 16:15:28.115569] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:01.932 [2024-12-12 16:15:28.115576] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:01.932 16:15:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.932 16:15:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:01.932 16:15:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:01.932 16:15:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:01.932 16:15:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:01.932 16:15:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:01.932 16:15:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:01.932 16:15:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:01.932 16:15:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:01.932 16:15:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:01.932 16:15:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:01.932 16:15:28 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:01.932 16:15:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:01.932 16:15:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.932 16:15:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:01.932 16:15:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.932 16:15:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:01.932 "name": "raid_bdev1", 00:19:01.932 "uuid": "ab4585d5-d19b-416a-b0f4-1949467552d8", 00:19:01.932 "strip_size_kb": 0, 00:19:01.932 "state": "online", 00:19:01.932 "raid_level": "raid1", 00:19:01.932 "superblock": true, 00:19:01.932 "num_base_bdevs": 2, 00:19:01.932 "num_base_bdevs_discovered": 1, 00:19:01.932 "num_base_bdevs_operational": 1, 00:19:01.932 "base_bdevs_list": [ 00:19:01.932 { 00:19:01.932 "name": null, 00:19:01.932 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:01.932 "is_configured": false, 00:19:01.932 "data_offset": 0, 00:19:01.932 "data_size": 7936 00:19:01.932 }, 00:19:01.932 { 00:19:01.932 "name": "BaseBdev2", 00:19:01.932 "uuid": "db9681a2-23c7-5055-92c3-781279299114", 00:19:01.932 "is_configured": true, 00:19:01.932 "data_offset": 256, 00:19:01.932 "data_size": 7936 00:19:01.932 } 00:19:01.932 ] 00:19:01.932 }' 00:19:01.932 16:15:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:01.932 16:15:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:02.500 16:15:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:02.500 16:15:28 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:02.500 16:15:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:02.500 16:15:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:02.500 16:15:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:02.500 16:15:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:02.500 16:15:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:02.500 16:15:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.500 16:15:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:02.500 16:15:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.500 16:15:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:02.500 "name": "raid_bdev1", 00:19:02.500 "uuid": "ab4585d5-d19b-416a-b0f4-1949467552d8", 00:19:02.500 "strip_size_kb": 0, 00:19:02.500 "state": "online", 00:19:02.500 "raid_level": "raid1", 00:19:02.500 "superblock": true, 00:19:02.500 "num_base_bdevs": 2, 00:19:02.500 "num_base_bdevs_discovered": 1, 00:19:02.500 "num_base_bdevs_operational": 1, 00:19:02.500 "base_bdevs_list": [ 00:19:02.500 { 00:19:02.500 "name": null, 00:19:02.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:02.500 "is_configured": false, 00:19:02.500 "data_offset": 0, 00:19:02.500 "data_size": 7936 00:19:02.500 }, 00:19:02.500 { 00:19:02.500 "name": "BaseBdev2", 00:19:02.500 "uuid": "db9681a2-23c7-5055-92c3-781279299114", 00:19:02.500 "is_configured": true, 00:19:02.500 "data_offset": 256, 
00:19:02.500 "data_size": 7936 00:19:02.500 } 00:19:02.500 ] 00:19:02.500 }' 00:19:02.500 16:15:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:02.500 16:15:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:02.500 16:15:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:02.500 16:15:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:02.500 16:15:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:19:02.500 16:15:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.500 16:15:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:02.500 16:15:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.500 16:15:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:02.500 16:15:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.500 16:15:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:02.500 [2024-12-12 16:15:28.792448] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:02.500 [2024-12-12 16:15:28.792504] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:02.500 [2024-12-12 16:15:28.792524] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:19:02.500 [2024-12-12 16:15:28.792533] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:02.500 [2024-12-12 16:15:28.792713] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:02.500 [2024-12-12 16:15:28.792735] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:02.500 [2024-12-12 16:15:28.792783] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:19:02.500 [2024-12-12 16:15:28.792795] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:02.500 [2024-12-12 16:15:28.792805] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:02.501 [2024-12-12 16:15:28.792814] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:19:02.501 BaseBdev1 00:19:02.501 16:15:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.501 16:15:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:19:03.880 16:15:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:03.880 16:15:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:03.880 16:15:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:03.880 16:15:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:03.880 16:15:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:03.880 16:15:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:03.880 16:15:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:03.880 16:15:29 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:03.880 16:15:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:03.880 16:15:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:03.880 16:15:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:03.880 16:15:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.880 16:15:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:03.880 16:15:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:03.880 16:15:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.880 16:15:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:03.880 "name": "raid_bdev1", 00:19:03.880 "uuid": "ab4585d5-d19b-416a-b0f4-1949467552d8", 00:19:03.880 "strip_size_kb": 0, 00:19:03.880 "state": "online", 00:19:03.880 "raid_level": "raid1", 00:19:03.880 "superblock": true, 00:19:03.880 "num_base_bdevs": 2, 00:19:03.880 "num_base_bdevs_discovered": 1, 00:19:03.880 "num_base_bdevs_operational": 1, 00:19:03.880 "base_bdevs_list": [ 00:19:03.880 { 00:19:03.880 "name": null, 00:19:03.880 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:03.880 "is_configured": false, 00:19:03.880 "data_offset": 0, 00:19:03.880 "data_size": 7936 00:19:03.880 }, 00:19:03.880 { 00:19:03.881 "name": "BaseBdev2", 00:19:03.881 "uuid": "db9681a2-23c7-5055-92c3-781279299114", 00:19:03.881 "is_configured": true, 00:19:03.881 "data_offset": 256, 00:19:03.881 "data_size": 7936 00:19:03.881 } 00:19:03.881 ] 00:19:03.881 }' 00:19:03.881 16:15:29 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:03.881 16:15:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:04.140 16:15:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:04.140 16:15:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:04.140 16:15:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:04.140 16:15:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:04.140 16:15:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:04.140 16:15:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:04.140 16:15:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:04.140 16:15:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.140 16:15:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:04.140 16:15:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.140 16:15:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:04.140 "name": "raid_bdev1", 00:19:04.140 "uuid": "ab4585d5-d19b-416a-b0f4-1949467552d8", 00:19:04.140 "strip_size_kb": 0, 00:19:04.140 "state": "online", 00:19:04.140 "raid_level": "raid1", 00:19:04.140 "superblock": true, 00:19:04.140 "num_base_bdevs": 2, 00:19:04.140 "num_base_bdevs_discovered": 1, 00:19:04.140 "num_base_bdevs_operational": 1, 00:19:04.140 "base_bdevs_list": [ 00:19:04.140 { 00:19:04.140 "name": 
null, 00:19:04.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:04.140 "is_configured": false, 00:19:04.140 "data_offset": 0, 00:19:04.141 "data_size": 7936 00:19:04.141 }, 00:19:04.141 { 00:19:04.141 "name": "BaseBdev2", 00:19:04.141 "uuid": "db9681a2-23c7-5055-92c3-781279299114", 00:19:04.141 "is_configured": true, 00:19:04.141 "data_offset": 256, 00:19:04.141 "data_size": 7936 00:19:04.141 } 00:19:04.141 ] 00:19:04.141 }' 00:19:04.141 16:15:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:04.141 16:15:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:04.141 16:15:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:04.141 16:15:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:04.141 16:15:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:04.141 16:15:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:19:04.141 16:15:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:04.141 16:15:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:04.141 16:15:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:04.141 16:15:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:04.141 16:15:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:04.141 16:15:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:04.141 16:15:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.141 16:15:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:04.141 [2024-12-12 16:15:30.406084] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:04.141 [2024-12-12 16:15:30.406245] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:04.141 [2024-12-12 16:15:30.406262] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:04.141 request: 00:19:04.141 { 00:19:04.141 "base_bdev": "BaseBdev1", 00:19:04.141 "raid_bdev": "raid_bdev1", 00:19:04.141 "method": "bdev_raid_add_base_bdev", 00:19:04.141 "req_id": 1 00:19:04.141 } 00:19:04.141 Got JSON-RPC error response 00:19:04.141 response: 00:19:04.141 { 00:19:04.141 "code": -22, 00:19:04.141 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:19:04.141 } 00:19:04.141 16:15:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:04.141 16:15:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:19:04.141 16:15:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:04.141 16:15:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:04.141 16:15:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:04.141 16:15:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:19:05.079 16:15:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:19:05.079 16:15:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:05.079 16:15:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:05.079 16:15:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:05.079 16:15:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:05.079 16:15:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:05.079 16:15:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:05.079 16:15:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:05.079 16:15:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:05.079 16:15:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:05.079 16:15:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:05.079 16:15:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.079 16:15:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:05.079 16:15:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:05.338 16:15:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.338 16:15:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:05.338 "name": "raid_bdev1", 00:19:05.338 "uuid": "ab4585d5-d19b-416a-b0f4-1949467552d8", 00:19:05.338 "strip_size_kb": 0, 
00:19:05.338 "state": "online", 00:19:05.338 "raid_level": "raid1", 00:19:05.338 "superblock": true, 00:19:05.338 "num_base_bdevs": 2, 00:19:05.338 "num_base_bdevs_discovered": 1, 00:19:05.338 "num_base_bdevs_operational": 1, 00:19:05.338 "base_bdevs_list": [ 00:19:05.338 { 00:19:05.338 "name": null, 00:19:05.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:05.338 "is_configured": false, 00:19:05.338 "data_offset": 0, 00:19:05.338 "data_size": 7936 00:19:05.338 }, 00:19:05.339 { 00:19:05.339 "name": "BaseBdev2", 00:19:05.339 "uuid": "db9681a2-23c7-5055-92c3-781279299114", 00:19:05.339 "is_configured": true, 00:19:05.339 "data_offset": 256, 00:19:05.339 "data_size": 7936 00:19:05.339 } 00:19:05.339 ] 00:19:05.339 }' 00:19:05.339 16:15:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:05.339 16:15:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:05.598 16:15:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:05.598 16:15:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:05.598 16:15:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:05.598 16:15:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:05.598 16:15:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:05.598 16:15:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:05.598 16:15:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.598 16:15:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:05.598 16:15:31 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:05.598 16:15:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.598 16:15:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:05.598 "name": "raid_bdev1", 00:19:05.598 "uuid": "ab4585d5-d19b-416a-b0f4-1949467552d8", 00:19:05.598 "strip_size_kb": 0, 00:19:05.598 "state": "online", 00:19:05.598 "raid_level": "raid1", 00:19:05.598 "superblock": true, 00:19:05.598 "num_base_bdevs": 2, 00:19:05.598 "num_base_bdevs_discovered": 1, 00:19:05.598 "num_base_bdevs_operational": 1, 00:19:05.598 "base_bdevs_list": [ 00:19:05.598 { 00:19:05.598 "name": null, 00:19:05.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:05.598 "is_configured": false, 00:19:05.598 "data_offset": 0, 00:19:05.598 "data_size": 7936 00:19:05.598 }, 00:19:05.598 { 00:19:05.598 "name": "BaseBdev2", 00:19:05.598 "uuid": "db9681a2-23c7-5055-92c3-781279299114", 00:19:05.598 "is_configured": true, 00:19:05.598 "data_offset": 256, 00:19:05.598 "data_size": 7936 00:19:05.598 } 00:19:05.598 ] 00:19:05.598 }' 00:19:05.598 16:15:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:05.598 16:15:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:05.598 16:15:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:05.858 16:15:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:05.858 16:15:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 91121 00:19:05.858 16:15:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 91121 ']' 00:19:05.858 16:15:31 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 91121 00:19:05.858 16:15:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:19:05.858 16:15:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:05.858 16:15:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91121 00:19:05.858 16:15:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:05.858 16:15:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:05.858 killing process with pid 91121 00:19:05.858 16:15:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91121' 00:19:05.858 16:15:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 91121 00:19:05.858 Received shutdown signal, test time was about 60.000000 seconds 00:19:05.858 00:19:05.858 Latency(us) 00:19:05.858 [2024-12-12T16:15:32.210Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:05.858 [2024-12-12T16:15:32.210Z] =================================================================================================================== 00:19:05.858 [2024-12-12T16:15:32.210Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:05.858 [2024-12-12 16:15:32.013157] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:05.858 [2024-12-12 16:15:32.013276] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:05.858 16:15:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 91121 00:19:05.858 [2024-12-12 16:15:32.013327] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to 
free all in destruct 00:19:05.858 [2024-12-12 16:15:32.013339] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:19:06.117 [2024-12-12 16:15:32.294220] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:07.056 16:15:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:19:07.056 00:19:07.056 real 0m17.506s 00:19:07.056 user 0m22.882s 00:19:07.056 sys 0m1.755s 00:19:07.056 16:15:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:07.056 16:15:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:07.056 ************************************ 00:19:07.056 END TEST raid_rebuild_test_sb_md_interleaved 00:19:07.056 ************************************ 00:19:07.056 16:15:33 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:19:07.056 16:15:33 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:19:07.056 16:15:33 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 91121 ']' 00:19:07.056 16:15:33 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 91121 00:19:07.315 16:15:33 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:19:07.315 00:19:07.315 real 12m11.229s 00:19:07.315 user 16m14.847s 00:19:07.315 sys 1m58.760s 00:19:07.315 16:15:33 bdev_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:07.315 16:15:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:07.315 ************************************ 00:19:07.315 END TEST bdev_raid 00:19:07.315 ************************************ 00:19:07.315 16:15:33 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:19:07.315 16:15:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:07.315 16:15:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:07.315 16:15:33 -- common/autotest_common.sh@10 -- # set +x 00:19:07.315 
************************************ 00:19:07.316 START TEST spdkcli_raid 00:19:07.316 ************************************ 00:19:07.316 16:15:33 spdkcli_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:19:07.316 * Looking for test storage... 00:19:07.316 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:19:07.316 16:15:33 spdkcli_raid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:07.316 16:15:33 spdkcli_raid -- common/autotest_common.sh@1711 -- # lcov --version 00:19:07.316 16:15:33 spdkcli_raid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:07.576 16:15:33 spdkcli_raid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:07.576 16:15:33 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:07.576 16:15:33 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:07.576 16:15:33 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:07.576 16:15:33 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:19:07.576 16:15:33 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:19:07.576 16:15:33 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:19:07.576 16:15:33 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:19:07.576 16:15:33 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:19:07.576 16:15:33 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:19:07.576 16:15:33 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:19:07.576 16:15:33 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:07.576 16:15:33 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:19:07.576 16:15:33 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:19:07.576 16:15:33 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:07.576 16:15:33 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:07.576 16:15:33 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:19:07.576 16:15:33 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:19:07.576 16:15:33 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:07.576 16:15:33 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:19:07.576 16:15:33 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:19:07.576 16:15:33 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:19:07.576 16:15:33 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:19:07.576 16:15:33 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:07.576 16:15:33 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:19:07.576 16:15:33 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:19:07.576 16:15:33 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:07.576 16:15:33 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:07.576 16:15:33 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:19:07.576 16:15:33 spdkcli_raid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:07.576 16:15:33 spdkcli_raid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:07.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:07.576 --rc genhtml_branch_coverage=1 00:19:07.576 --rc genhtml_function_coverage=1 00:19:07.576 --rc genhtml_legend=1 00:19:07.576 --rc geninfo_all_blocks=1 00:19:07.576 --rc geninfo_unexecuted_blocks=1 00:19:07.576 00:19:07.576 ' 00:19:07.576 16:15:33 spdkcli_raid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:07.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:07.576 --rc genhtml_branch_coverage=1 00:19:07.576 --rc genhtml_function_coverage=1 00:19:07.576 --rc genhtml_legend=1 00:19:07.576 --rc geninfo_all_blocks=1 00:19:07.576 --rc geninfo_unexecuted_blocks=1 00:19:07.576 00:19:07.576 ' 00:19:07.576 
16:15:33 spdkcli_raid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:07.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:07.576 --rc genhtml_branch_coverage=1 00:19:07.576 --rc genhtml_function_coverage=1 00:19:07.576 --rc genhtml_legend=1 00:19:07.576 --rc geninfo_all_blocks=1 00:19:07.576 --rc geninfo_unexecuted_blocks=1 00:19:07.576 00:19:07.576 ' 00:19:07.576 16:15:33 spdkcli_raid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:07.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:07.576 --rc genhtml_branch_coverage=1 00:19:07.576 --rc genhtml_function_coverage=1 00:19:07.576 --rc genhtml_legend=1 00:19:07.576 --rc geninfo_all_blocks=1 00:19:07.576 --rc geninfo_unexecuted_blocks=1 00:19:07.576 00:19:07.576 ' 00:19:07.576 16:15:33 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:19:07.576 16:15:33 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:19:07.576 16:15:33 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:19:07.576 16:15:33 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:19:07.576 16:15:33 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:19:07.576 16:15:33 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:19:07.576 16:15:33 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:19:07.576 16:15:33 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:19:07.576 16:15:33 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:19:07.576 16:15:33 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:19:07.576 16:15:33 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:19:07.576 16:15:33 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:19:07.576 16:15:33 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:19:07.576 16:15:33 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:19:07.576 16:15:33 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:19:07.576 16:15:33 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:19:07.576 16:15:33 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:19:07.576 16:15:33 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:19:07.576 16:15:33 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:19:07.576 16:15:33 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:19:07.576 16:15:33 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:19:07.576 16:15:33 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:19:07.576 16:15:33 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:19:07.576 16:15:33 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:19:07.576 16:15:33 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:19:07.576 16:15:33 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:19:07.576 16:15:33 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:19:07.576 16:15:33 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:19:07.576 16:15:33 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:19:07.576 16:15:33 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:19:07.576 16:15:33 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:19:07.576 16:15:33 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:19:07.576 16:15:33 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:19:07.576 16:15:33 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:07.576 16:15:33 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:07.576 16:15:33 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:19:07.576 16:15:33 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=91797 00:19:07.576 16:15:33 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:19:07.576 16:15:33 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 91797 00:19:07.576 16:15:33 spdkcli_raid -- common/autotest_common.sh@835 -- # '[' -z 91797 ']' 00:19:07.576 16:15:33 spdkcli_raid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:07.576 16:15:33 spdkcli_raid -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:07.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:07.576 16:15:33 spdkcli_raid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:07.576 16:15:33 spdkcli_raid -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:07.576 16:15:33 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:07.576 [2024-12-12 16:15:33.870588] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:19:07.576 [2024-12-12 16:15:33.871279] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91797 ] 00:19:07.836 [2024-12-12 16:15:34.048136] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:07.836 [2024-12-12 16:15:34.153652] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:07.836 [2024-12-12 16:15:34.153715] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:19:08.775 16:15:34 spdkcli_raid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:08.775 16:15:34 spdkcli_raid -- common/autotest_common.sh@868 -- # return 0 00:19:08.775 16:15:34 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:19:08.775 16:15:34 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:08.775 16:15:34 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:08.775 16:15:35 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:19:08.775 16:15:35 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:08.775 16:15:35 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:08.775 16:15:35 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:19:08.775 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:19:08.775 ' 00:19:10.681 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:19:10.681 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:19:10.681 16:15:36 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:19:10.681 16:15:36 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:10.681 16:15:36 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:19:10.681 16:15:36 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:19:10.681 16:15:36 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:10.681 16:15:36 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:10.681 16:15:36 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:19:10.681 ' 00:19:11.644 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:19:11.644 16:15:37 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:19:11.644 16:15:37 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:11.644 16:15:37 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:11.644 16:15:37 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:19:11.644 16:15:37 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:11.644 16:15:37 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:11.644 16:15:37 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:19:11.644 16:15:37 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:19:12.248 16:15:38 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:19:12.248 16:15:38 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:19:12.248 16:15:38 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:19:12.248 16:15:38 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:12.248 16:15:38 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:12.249 16:15:38 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:19:12.249 16:15:38 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:12.249 16:15:38 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:12.249 16:15:38 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:19:12.249 ' 00:19:13.188 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:19:13.447 16:15:39 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:19:13.447 16:15:39 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:13.447 16:15:39 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:13.447 16:15:39 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:19:13.447 16:15:39 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:13.447 16:15:39 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:13.447 16:15:39 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:19:13.447 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:19:13.447 ' 00:19:14.825 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:19:14.825 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:19:14.825 16:15:41 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:19:14.825 16:15:41 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:14.825 16:15:41 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:15.085 16:15:41 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 91797 00:19:15.085 16:15:41 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 91797 ']' 00:19:15.085 16:15:41 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 91797 00:19:15.085 16:15:41 spdkcli_raid -- 
common/autotest_common.sh@959 -- # uname 00:19:15.085 16:15:41 spdkcli_raid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:15.085 16:15:41 spdkcli_raid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91797 00:19:15.085 16:15:41 spdkcli_raid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:15.085 16:15:41 spdkcli_raid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:15.085 16:15:41 spdkcli_raid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91797' 00:19:15.085 killing process with pid 91797 00:19:15.085 16:15:41 spdkcli_raid -- common/autotest_common.sh@973 -- # kill 91797 00:19:15.085 16:15:41 spdkcli_raid -- common/autotest_common.sh@978 -- # wait 91797 00:19:17.630 16:15:43 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:19:17.630 16:15:43 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 91797 ']' 00:19:17.630 16:15:43 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 91797 00:19:17.630 16:15:43 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 91797 ']' 00:19:17.630 16:15:43 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 91797 00:19:17.630 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (91797) - No such process 00:19:17.630 Process with pid 91797 is not found 00:19:17.630 16:15:43 spdkcli_raid -- common/autotest_common.sh@981 -- # echo 'Process with pid 91797 is not found' 00:19:17.630 16:15:43 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:19:17.630 16:15:43 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:19:17.630 16:15:43 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:19:17.630 16:15:43 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:19:17.630 00:19:17.630 real 0m10.257s 00:19:17.630 user 0m21.075s 00:19:17.630 sys 
0m1.159s 00:19:17.630 16:15:43 spdkcli_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:17.630 16:15:43 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:17.630 ************************************ 00:19:17.630 END TEST spdkcli_raid 00:19:17.630 ************************************ 00:19:17.630 16:15:43 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:19:17.630 16:15:43 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:17.630 16:15:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:17.630 16:15:43 -- common/autotest_common.sh@10 -- # set +x 00:19:17.630 ************************************ 00:19:17.630 START TEST blockdev_raid5f 00:19:17.630 ************************************ 00:19:17.630 16:15:43 blockdev_raid5f -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:19:17.630 * Looking for test storage... 00:19:17.630 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:19:17.630 16:15:43 blockdev_raid5f -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:17.630 16:15:43 blockdev_raid5f -- common/autotest_common.sh@1711 -- # lcov --version 00:19:17.630 16:15:43 blockdev_raid5f -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:17.890 16:15:44 blockdev_raid5f -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:17.890 16:15:44 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:17.890 16:15:44 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:17.890 16:15:44 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:17.890 16:15:44 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:19:17.890 16:15:44 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:19:17.890 16:15:44 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:19:17.890 16:15:44 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
ver2 00:19:17.890 16:15:44 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:19:17.890 16:15:44 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:19:17.891 16:15:44 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:19:17.891 16:15:44 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:17.891 16:15:44 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:19:17.891 16:15:44 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:19:17.891 16:15:44 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:17.891 16:15:44 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:17.891 16:15:44 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:19:17.891 16:15:44 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:19:17.891 16:15:44 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:17.891 16:15:44 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:19:17.891 16:15:44 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:19:17.891 16:15:44 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:19:17.891 16:15:44 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:19:17.891 16:15:44 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:17.891 16:15:44 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:19:17.891 16:15:44 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:19:17.891 16:15:44 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:17.891 16:15:44 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:17.891 16:15:44 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:19:17.891 16:15:44 blockdev_raid5f -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:17.891 16:15:44 blockdev_raid5f -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:17.891 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:17.891 --rc genhtml_branch_coverage=1 00:19:17.891 --rc genhtml_function_coverage=1 00:19:17.891 --rc genhtml_legend=1 00:19:17.891 --rc geninfo_all_blocks=1 00:19:17.891 --rc geninfo_unexecuted_blocks=1 00:19:17.891 00:19:17.891 ' 00:19:17.891 16:15:44 blockdev_raid5f -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:17.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:17.891 --rc genhtml_branch_coverage=1 00:19:17.891 --rc genhtml_function_coverage=1 00:19:17.891 --rc genhtml_legend=1 00:19:17.891 --rc geninfo_all_blocks=1 00:19:17.891 --rc geninfo_unexecuted_blocks=1 00:19:17.891 00:19:17.891 ' 00:19:17.891 16:15:44 blockdev_raid5f -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:17.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:17.891 --rc genhtml_branch_coverage=1 00:19:17.891 --rc genhtml_function_coverage=1 00:19:17.891 --rc genhtml_legend=1 00:19:17.891 --rc geninfo_all_blocks=1 00:19:17.891 --rc geninfo_unexecuted_blocks=1 00:19:17.891 00:19:17.891 ' 00:19:17.891 16:15:44 blockdev_raid5f -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:17.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:17.891 --rc genhtml_branch_coverage=1 00:19:17.891 --rc genhtml_function_coverage=1 00:19:17.891 --rc genhtml_legend=1 00:19:17.891 --rc geninfo_all_blocks=1 00:19:17.891 --rc geninfo_unexecuted_blocks=1 00:19:17.891 00:19:17.891 ' 00:19:17.891 16:15:44 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:19:17.891 16:15:44 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:19:17.891 16:15:44 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:19:17.891 16:15:44 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:19:17.891 16:15:44 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:19:17.891 16:15:44 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:19:17.891 16:15:44 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:19:17.891 16:15:44 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:19:17.891 16:15:44 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:19:17.891 16:15:44 blockdev_raid5f -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:19:17.891 16:15:44 blockdev_raid5f -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:19:17.891 16:15:44 blockdev_raid5f -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:19:17.891 16:15:44 blockdev_raid5f -- bdev/blockdev.sh@711 -- # uname -s 00:19:17.891 16:15:44 blockdev_raid5f -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:19:17.891 16:15:44 blockdev_raid5f -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:19:17.891 16:15:44 blockdev_raid5f -- bdev/blockdev.sh@719 -- # test_type=raid5f 00:19:17.891 16:15:44 blockdev_raid5f -- bdev/blockdev.sh@720 -- # crypto_device= 00:19:17.891 16:15:44 blockdev_raid5f -- bdev/blockdev.sh@721 -- # dek= 00:19:17.891 16:15:44 blockdev_raid5f -- bdev/blockdev.sh@722 -- # env_ctx= 00:19:17.891 16:15:44 blockdev_raid5f -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:19:17.891 16:15:44 blockdev_raid5f -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:19:17.891 16:15:44 blockdev_raid5f -- bdev/blockdev.sh@727 -- # [[ raid5f == bdev ]] 00:19:17.891 16:15:44 blockdev_raid5f -- bdev/blockdev.sh@727 -- # [[ raid5f == crypto_* ]] 00:19:17.891 16:15:44 blockdev_raid5f -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:19:17.891 16:15:44 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=92077 00:19:17.891 16:15:44 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:19:17.891 16:15:44 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess 
"$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:19:17.891 16:15:44 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 92077 00:19:17.891 16:15:44 blockdev_raid5f -- common/autotest_common.sh@835 -- # '[' -z 92077 ']' 00:19:17.891 16:15:44 blockdev_raid5f -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:17.891 16:15:44 blockdev_raid5f -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:17.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:17.891 16:15:44 blockdev_raid5f -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:17.891 16:15:44 blockdev_raid5f -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:17.891 16:15:44 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:17.891 [2024-12-12 16:15:44.203167] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:19:17.891 [2024-12-12 16:15:44.203292] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92077 ] 00:19:18.151 [2024-12-12 16:15:44.383312] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:18.411 [2024-12-12 16:15:44.514758] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:19.355 16:15:45 blockdev_raid5f -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:19.355 16:15:45 blockdev_raid5f -- common/autotest_common.sh@868 -- # return 0 00:19:19.355 16:15:45 blockdev_raid5f -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:19:19.355 16:15:45 blockdev_raid5f -- bdev/blockdev.sh@763 -- # setup_raid5f_conf 00:19:19.355 16:15:45 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:19:19.355 16:15:45 blockdev_raid5f -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.355 16:15:45 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:19.355 Malloc0 00:19:19.355 Malloc1 00:19:19.355 Malloc2 00:19:19.355 16:15:45 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.355 16:15:45 blockdev_raid5f -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:19:19.355 16:15:45 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.355 16:15:45 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:19.355 16:15:45 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.355 16:15:45 blockdev_raid5f -- bdev/blockdev.sh@777 -- # cat 00:19:19.355 16:15:45 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:19:19.355 16:15:45 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.355 16:15:45 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:19.355 16:15:45 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.355 16:15:45 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:19:19.355 16:15:45 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.355 16:15:45 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:19.615 16:15:45 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.615 16:15:45 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:19:19.615 16:15:45 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.615 16:15:45 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:19.615 16:15:45 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.615 16:15:45 blockdev_raid5f -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:19:19.615 16:15:45 blockdev_raid5f -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 
00:19:19.615 16:15:45 blockdev_raid5f -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:19:19.615 16:15:45 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.615 16:15:45 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:19.615 16:15:45 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.615 16:15:45 blockdev_raid5f -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:19:19.615 16:15:45 blockdev_raid5f -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "145d0159-8404-482a-8aba-c71fa8cf91ae"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "145d0159-8404-482a-8aba-c71fa8cf91ae",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "145d0159-8404-482a-8aba-c71fa8cf91ae",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "b784e3f3-b798-4cc7-b061-973aa8573d17",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "5d4cce38-b6d4-4545-8427-da48932d8847",' ' "is_configured": true,' ' "data_offset": 0,' ' 
"data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "b9e1a265-4343-4759-8e2f-ed27a62779fe",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:19:19.615 16:15:45 blockdev_raid5f -- bdev/blockdev.sh@786 -- # jq -r .name 00:19:19.615 16:15:45 blockdev_raid5f -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:19:19.615 16:15:45 blockdev_raid5f -- bdev/blockdev.sh@789 -- # hello_world_bdev=raid5f 00:19:19.615 16:15:45 blockdev_raid5f -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:19:19.615 16:15:45 blockdev_raid5f -- bdev/blockdev.sh@791 -- # killprocess 92077 00:19:19.615 16:15:45 blockdev_raid5f -- common/autotest_common.sh@954 -- # '[' -z 92077 ']' 00:19:19.615 16:15:45 blockdev_raid5f -- common/autotest_common.sh@958 -- # kill -0 92077 00:19:19.615 16:15:45 blockdev_raid5f -- common/autotest_common.sh@959 -- # uname 00:19:19.615 16:15:45 blockdev_raid5f -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:19.615 16:15:45 blockdev_raid5f -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 92077 00:19:19.615 killing process with pid 92077 00:19:19.615 16:15:45 blockdev_raid5f -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:19.615 16:15:45 blockdev_raid5f -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:19.615 16:15:45 blockdev_raid5f -- common/autotest_common.sh@972 -- # echo 'killing process with pid 92077' 00:19:19.615 16:15:45 blockdev_raid5f -- common/autotest_common.sh@973 -- # kill 92077 00:19:19.615 16:15:45 blockdev_raid5f -- common/autotest_common.sh@978 -- # wait 92077 00:19:22.909 16:15:48 blockdev_raid5f -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:19:22.909 16:15:48 blockdev_raid5f -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:19:22.909 16:15:48 
blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:19:22.909 16:15:48 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:22.909 16:15:48 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:22.909 ************************************ 00:19:22.909 START TEST bdev_hello_world 00:19:22.909 ************************************ 00:19:22.909 16:15:48 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:19:22.909 [2024-12-12 16:15:48.746170] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:19:22.909 [2024-12-12 16:15:48.746281] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92150 ] 00:19:22.909 [2024-12-12 16:15:48.925127] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:22.909 [2024-12-12 16:15:49.058113] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:23.478 [2024-12-12 16:15:49.680377] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:19:23.478 [2024-12-12 16:15:49.680449] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:19:23.478 [2024-12-12 16:15:49.680467] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:19:23.478 [2024-12-12 16:15:49.680996] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:19:23.478 [2024-12-12 16:15:49.681147] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:19:23.478 [2024-12-12 16:15:49.681165] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:19:23.478 [2024-12-12 16:15:49.681217] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev 
: Hello World! 00:19:23.478 00:19:23.478 [2024-12-12 16:15:49.681237] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:19:24.859 ************************************ 00:19:24.859 END TEST bdev_hello_world 00:19:24.859 ************************************ 00:19:24.859 00:19:24.859 real 0m2.474s 00:19:24.859 user 0m2.001s 00:19:24.859 sys 0m0.343s 00:19:24.859 16:15:51 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:24.859 16:15:51 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:19:25.119 16:15:51 blockdev_raid5f -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:19:25.119 16:15:51 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:25.119 16:15:51 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:25.119 16:15:51 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:25.119 ************************************ 00:19:25.119 START TEST bdev_bounds 00:19:25.119 ************************************ 00:19:25.119 16:15:51 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:19:25.119 16:15:51 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=92192 00:19:25.119 16:15:51 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:19:25.119 16:15:51 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:19:25.119 16:15:51 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 92192' 00:19:25.119 Process bdevio pid: 92192 00:19:25.119 16:15:51 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 92192 00:19:25.119 16:15:51 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 92192 ']' 00:19:25.119 16:15:51 
blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:25.119 16:15:51 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:25.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:25.119 16:15:51 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:25.119 16:15:51 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:25.119 16:15:51 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:19:25.119 [2024-12-12 16:15:51.316788] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:19:25.119 [2024-12-12 16:15:51.316933] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92192 ] 00:19:25.379 [2024-12-12 16:15:51.491429] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:25.379 [2024-12-12 16:15:51.625318] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:19:25.379 [2024-12-12 16:15:51.625519] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:19:25.379 [2024-12-12 16:15:51.625536] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:25.948 16:15:52 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:25.948 16:15:52 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:19:25.948 16:15:52 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:19:26.208 I/O targets: 00:19:26.208 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:19:26.208 00:19:26.208 
00:19:26.208 CUnit - A unit testing framework for C - Version 2.1-3 00:19:26.208 http://cunit.sourceforge.net/ 00:19:26.208 00:19:26.208 00:19:26.208 Suite: bdevio tests on: raid5f 00:19:26.208 Test: blockdev write read block ...passed 00:19:26.208 Test: blockdev write zeroes read block ...passed 00:19:26.208 Test: blockdev write zeroes read no split ...passed 00:19:26.208 Test: blockdev write zeroes read split ...passed 00:19:26.208 Test: blockdev write zeroes read split partial ...passed 00:19:26.208 Test: blockdev reset ...passed 00:19:26.468 Test: blockdev write read 8 blocks ...passed 00:19:26.468 Test: blockdev write read size > 128k ...passed 00:19:26.468 Test: blockdev write read invalid size ...passed 00:19:26.468 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:26.468 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:26.468 Test: blockdev write read max offset ...passed 00:19:26.468 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:26.468 Test: blockdev writev readv 8 blocks ...passed 00:19:26.468 Test: blockdev writev readv 30 x 1block ...passed 00:19:26.468 Test: blockdev writev readv block ...passed 00:19:26.468 Test: blockdev writev readv size > 128k ...passed 00:19:26.468 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:26.468 Test: blockdev comparev and writev ...passed 00:19:26.468 Test: blockdev nvme passthru rw ...passed 00:19:26.468 Test: blockdev nvme passthru vendor specific ...passed 00:19:26.468 Test: blockdev nvme admin passthru ...passed 00:19:26.468 Test: blockdev copy ...passed 00:19:26.468 00:19:26.468 Run Summary: Type Total Ran Passed Failed Inactive 00:19:26.468 suites 1 1 n/a 0 0 00:19:26.468 tests 23 23 23 0 0 00:19:26.468 asserts 130 130 130 0 n/a 00:19:26.468 00:19:26.468 Elapsed time = 0.608 seconds 00:19:26.468 0 00:19:26.468 16:15:52 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 92192 00:19:26.468 
16:15:52 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 92192 ']' 00:19:26.468 16:15:52 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 92192 00:19:26.468 16:15:52 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:19:26.468 16:15:52 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:26.468 16:15:52 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 92192 00:19:26.468 16:15:52 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:26.468 16:15:52 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:26.468 16:15:52 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 92192' 00:19:26.468 killing process with pid 92192 00:19:26.468 16:15:52 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@973 -- # kill 92192 00:19:26.468 16:15:52 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@978 -- # wait 92192 00:19:27.849 16:15:54 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:19:27.849 00:19:27.849 real 0m2.870s 00:19:27.849 user 0m7.007s 00:19:27.849 sys 0m0.471s 00:19:27.849 16:15:54 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:27.849 16:15:54 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:19:27.849 ************************************ 00:19:27.849 END TEST bdev_bounds 00:19:27.849 ************************************ 00:19:27.849 16:15:54 blockdev_raid5f -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:19:27.849 16:15:54 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:19:27.849 16:15:54 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:27.849 
16:15:54 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:27.849 ************************************ 00:19:27.849 START TEST bdev_nbd 00:19:27.849 ************************************ 00:19:27.849 16:15:54 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:19:27.849 16:15:54 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:19:27.849 16:15:54 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:19:27.849 16:15:54 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:27.849 16:15:54 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:19:27.849 16:15:54 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:19:27.849 16:15:54 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:19:27.849 16:15:54 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:19:27.849 16:15:54 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:19:27.849 16:15:54 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:19:27.849 16:15:54 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:19:27.849 16:15:54 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:19:27.849 16:15:54 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:19:27.849 16:15:54 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:19:27.849 16:15:54 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:19:27.849 16:15:54 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 
-- # local bdev_list 00:19:27.849 16:15:54 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=92252 00:19:27.849 16:15:54 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:19:27.849 16:15:54 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:19:27.849 16:15:54 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 92252 /var/tmp/spdk-nbd.sock 00:19:27.849 16:15:54 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 92252 ']' 00:19:27.849 16:15:54 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:19:27.849 16:15:54 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:27.849 16:15:54 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:19:27.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:19:27.849 16:15:54 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:27.849 16:15:54 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:19:28.110 [2024-12-12 16:15:54.271973] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:19:28.110 [2024-12-12 16:15:54.272084] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:28.110 [2024-12-12 16:15:54.450085] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:28.370 [2024-12-12 16:15:54.578220] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:28.947 16:15:55 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:28.947 16:15:55 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:19:28.947 16:15:55 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:19:28.947 16:15:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:28.947 16:15:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:19:28.947 16:15:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:19:28.947 16:15:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:19:28.947 16:15:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:28.947 16:15:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:19:28.947 16:15:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:19:28.947 16:15:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:19:28.947 16:15:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:19:28.948 16:15:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:19:28.948 16:15:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:19:28.948 16:15:55 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:19:29.207 16:15:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:19:29.207 16:15:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:19:29.207 16:15:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:19:29.207 16:15:55 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:29.207 16:15:55 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:29.207 16:15:55 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:29.207 16:15:55 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:29.207 16:15:55 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:29.207 16:15:55 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:29.207 16:15:55 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:29.207 16:15:55 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:29.207 16:15:55 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:29.207 1+0 records in 00:19:29.207 1+0 records out 00:19:29.207 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00247093 s, 1.7 MB/s 00:19:29.207 16:15:55 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:29.207 16:15:55 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:29.207 16:15:55 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:29.207 16:15:55 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
00:19:29.207 16:15:55 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:29.207 16:15:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:19:29.207 16:15:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:19:29.207 16:15:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:29.466 16:15:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:19:29.466 { 00:19:29.466 "nbd_device": "/dev/nbd0", 00:19:29.466 "bdev_name": "raid5f" 00:19:29.466 } 00:19:29.466 ]' 00:19:29.466 16:15:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:19:29.466 16:15:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:19:29.466 { 00:19:29.466 "nbd_device": "/dev/nbd0", 00:19:29.466 "bdev_name": "raid5f" 00:19:29.466 } 00:19:29.466 ]' 00:19:29.466 16:15:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:19:29.466 16:15:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:29.466 16:15:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:29.466 16:15:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:29.466 16:15:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:29.466 16:15:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:29.466 16:15:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:29.466 16:15:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:29.725 16:15:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:19:29.725 16:15:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:29.725 16:15:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:29.725 16:15:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:29.726 16:15:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:29.726 16:15:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:29.726 16:15:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:29.726 16:15:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:29.726 16:15:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:29.726 16:15:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:29.726 16:15:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:29.985 16:15:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:19:29.985 16:15:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:19:29.985 16:15:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:29.985 16:15:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:19:29.986 16:15:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:19:29.986 16:15:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:29.986 16:15:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:19:29.986 16:15:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:19:29.986 16:15:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:19:29.986 16:15:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:19:29.986 16:15:56 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:19:29.986 16:15:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:19:29.986 16:15:56 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:19:29.986 16:15:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:29.986 16:15:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:19:29.986 16:15:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:19:29.986 16:15:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:19:29.986 16:15:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:19:29.986 16:15:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:19:29.986 16:15:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:29.986 16:15:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:19:29.986 16:15:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:29.986 16:15:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:29.986 16:15:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:29.986 16:15:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:19:29.986 16:15:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:29.986 16:15:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:29.986 16:15:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:19:30.246 /dev/nbd0 00:19:30.246 16:15:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:30.246 16:15:56 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:30.246 16:15:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:30.246 16:15:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:30.246 16:15:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:30.246 16:15:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:30.246 16:15:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:30.246 16:15:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:30.246 16:15:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:30.246 16:15:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:30.246 16:15:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:30.246 1+0 records in 00:19:30.246 1+0 records out 00:19:30.246 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000559159 s, 7.3 MB/s 00:19:30.246 16:15:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:30.246 16:15:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:30.246 16:15:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:30.246 16:15:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:30.246 16:15:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:30.246 16:15:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:30.246 16:15:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:30.246 16:15:56 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:30.246 16:15:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:30.246 16:15:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:30.506 16:15:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:19:30.506 { 00:19:30.506 "nbd_device": "/dev/nbd0", 00:19:30.506 "bdev_name": "raid5f" 00:19:30.506 } 00:19:30.506 ]' 00:19:30.506 16:15:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:19:30.506 { 00:19:30.506 "nbd_device": "/dev/nbd0", 00:19:30.506 "bdev_name": "raid5f" 00:19:30.506 } 00:19:30.506 ]' 00:19:30.506 16:15:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:30.506 16:15:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:19:30.506 16:15:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:30.506 16:15:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:19:30.506 16:15:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:19:30.506 16:15:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:19:30.506 16:15:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:19:30.506 16:15:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:19:30.506 16:15:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:19:30.506 16:15:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:19:30.506 16:15:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:19:30.506 16:15:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:19:30.506 16:15:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:30.506 16:15:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:19:30.506 16:15:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:19:30.506 256+0 records in 00:19:30.506 256+0 records out 00:19:30.506 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0124583 s, 84.2 MB/s 00:19:30.506 16:15:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:30.506 16:15:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:19:30.506 256+0 records in 00:19:30.506 256+0 records out 00:19:30.506 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0336216 s, 31.2 MB/s 00:19:30.506 16:15:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:19:30.506 16:15:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:19:30.506 16:15:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:19:30.506 16:15:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:19:30.506 16:15:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:30.506 16:15:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:19:30.506 16:15:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:19:30.506 16:15:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:30.506 16:15:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:19:30.506 16:15:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:30.506 16:15:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:30.506 16:15:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:30.506 16:15:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:30.506 16:15:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:30.506 16:15:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:30.506 16:15:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:30.506 16:15:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:30.766 16:15:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:30.766 16:15:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:30.766 16:15:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:30.766 16:15:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:30.766 16:15:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:30.766 16:15:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:30.766 16:15:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:30.766 16:15:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:30.766 16:15:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:30.766 16:15:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:30.766 16:15:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
00:19:31.026 16:15:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:19:31.026 16:15:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:19:31.026 16:15:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:31.026 16:15:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:19:31.026 16:15:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:31.026 16:15:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:19:31.026 16:15:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:19:31.026 16:15:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:19:31.026 16:15:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:19:31.026 16:15:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:19:31.026 16:15:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:19:31.026 16:15:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:19:31.026 16:15:57 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:31.026 16:15:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:31.026 16:15:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:19:31.026 16:15:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:19:31.286 malloc_lvol_verify 00:19:31.286 16:15:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:19:31.546 32eb2d85-93a9-42c9-9321-5d45ae0c47be 00:19:31.546 16:15:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:19:31.546 51a0450d-827d-4d53-8039-675822cadb9a 00:19:31.806 16:15:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:19:31.806 /dev/nbd0 00:19:31.806 16:15:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:19:31.806 16:15:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:19:31.806 16:15:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:19:31.806 16:15:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:19:31.806 16:15:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:19:31.806 mke2fs 1.47.0 (5-Feb-2023) 00:19:31.806 Discarding device blocks: 0/4096 done 00:19:31.806 Creating filesystem with 4096 1k blocks and 1024 inodes 00:19:31.806 00:19:31.806 Allocating group tables: 0/1 done 00:19:31.806 Writing inode tables: 0/1 done 00:19:31.806 Creating journal (1024 blocks): done 00:19:31.806 Writing superblocks and filesystem accounting information: 0/1 done 00:19:31.806 00:19:31.806 16:15:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:31.806 16:15:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:31.806 16:15:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:31.806 16:15:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:31.807 16:15:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:31.807 16:15:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:31.807 16:15:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:32.067 16:15:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:32.067 16:15:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:32.067 16:15:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:32.067 16:15:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:32.067 16:15:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:32.067 16:15:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:32.067 16:15:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:32.067 16:15:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:32.067 16:15:58 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 92252 00:19:32.067 16:15:58 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 92252 ']' 00:19:32.067 16:15:58 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 92252 00:19:32.067 16:15:58 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:19:32.067 16:15:58 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:32.067 16:15:58 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 92252 00:19:32.067 killing process with pid 92252 00:19:32.067 16:15:58 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:32.067 16:15:58 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:32.067 16:15:58 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 92252' 00:19:32.067 16:15:58 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@973 -- # kill 92252 00:19:32.067 16:15:58 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@978 -- # wait 92252 00:19:33.976 16:15:59 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:19:33.976 00:19:33.976 real 0m5.748s 00:19:33.976 user 0m7.548s 00:19:33.976 sys 0m1.408s 00:19:33.976 16:15:59 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:33.976 ************************************ 00:19:33.976 END TEST bdev_nbd 00:19:33.976 ************************************ 00:19:33.976 16:15:59 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:19:33.976 16:15:59 blockdev_raid5f -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:19:33.976 16:15:59 blockdev_raid5f -- bdev/blockdev.sh@801 -- # '[' raid5f = nvme ']' 00:19:33.976 16:15:59 blockdev_raid5f -- bdev/blockdev.sh@801 -- # '[' raid5f = gpt ']' 00:19:33.976 16:15:59 blockdev_raid5f -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:19:33.976 16:15:59 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:33.976 16:15:59 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:33.976 16:15:59 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:33.976 ************************************ 00:19:33.976 START TEST bdev_fio 00:19:33.976 ************************************ 00:19:33.976 16:15:59 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:19:33.976 16:15:59 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:19:33.976 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:19:33.976 16:15:59 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:19:33.976 16:15:59 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:19:33.976 16:15:59 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:19:33.976 16:15:59 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:19:33.976 16:16:00 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:19:33.976 16:16:00 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:19:33.976 16:16:00 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:33.976 16:16:00 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:19:33.976 16:16:00 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:19:33.976 16:16:00 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:19:33.976 16:16:00 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:19:33.976 16:16:00 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:19:33.976 16:16:00 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:19:33.976 16:16:00 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:19:33.976 16:16:00 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:33.976 16:16:00 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:19:33.976 16:16:00 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:19:33.976 16:16:00 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:19:33.976 16:16:00 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:19:33.977 16:16:00 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:19:33.977 16:16:00 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 
00:19:33.977 16:16:00 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:19:33.977 16:16:00 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:19:33.977 16:16:00 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:19:33.977 16:16:00 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:19:33.977 16:16:00 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:19:33.977 16:16:00 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:33.977 16:16:00 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:19:33.977 16:16:00 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:33.977 16:16:00 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:19:33.977 ************************************ 00:19:33.977 START TEST bdev_fio_rw_verify 00:19:33.977 ************************************ 00:19:33.977 16:16:00 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:33.977 16:16:00 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:33.977 16:16:00 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:33.977 16:16:00 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:33.977 16:16:00 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:33.977 16:16:00 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:33.977 16:16:00 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:19:33.977 16:16:00 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:33.977 16:16:00 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:33.977 16:16:00 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:33.977 16:16:00 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:19:33.977 16:16:00 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:33.977 16:16:00 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:33.977 16:16:00 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:33.977 16:16:00 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1351 -- # break 00:19:33.977 16:16:00 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:33.977 16:16:00 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:34.237 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:19:34.237 fio-3.35 00:19:34.237 Starting 1 thread 00:19:46.456 00:19:46.456 job_raid5f: (groupid=0, jobs=1): err= 0: pid=92458: Thu Dec 12 16:16:11 2024 00:19:46.456 read: IOPS=12.3k, BW=48.2MiB/s (50.5MB/s)(482MiB/10001msec) 00:19:46.456 slat (nsec): min=17639, max=68613, avg=19344.26, stdev=1994.70 00:19:46.456 clat (usec): min=11, max=332, avg=131.47, stdev=45.88 00:19:46.456 lat (usec): min=30, max=359, avg=150.81, stdev=46.14 00:19:46.456 clat percentiles (usec): 00:19:46.456 | 50.000th=[ 135], 99.000th=[ 219], 99.900th=[ 251], 99.990th=[ 302], 00:19:46.456 | 99.999th=[ 326] 00:19:46.456 write: IOPS=13.0k, BW=50.6MiB/s (53.1MB/s)(500MiB/9874msec); 0 zone resets 00:19:46.456 slat (usec): min=7, max=143, avg=15.98, stdev= 3.33 00:19:46.456 clat (usec): min=58, max=1060, avg=297.50, stdev=38.96 00:19:46.456 lat (usec): min=73, max=1204, avg=313.49, stdev=39.76 00:19:46.456 clat percentiles (usec): 00:19:46.456 | 50.000th=[ 302], 99.000th=[ 375], 99.900th=[ 523], 99.990th=[ 947], 00:19:46.456 | 99.999th=[ 1037] 00:19:46.456 bw ( KiB/s): min=48160, max=53424, per=98.61%, avg=51125.05, stdev=1329.32, samples=19 00:19:46.456 iops : min=12040, max=13356, avg=12781.26, stdev=332.33, samples=19 00:19:46.456 lat (usec) : 20=0.01%, 50=0.01%, 
100=15.79%, 250=39.54%, 500=44.61% 00:19:46.456 lat (usec) : 750=0.04%, 1000=0.02% 00:19:46.456 lat (msec) : 2=0.01% 00:19:46.456 cpu : usr=98.79%, sys=0.49%, ctx=28, majf=0, minf=10121 00:19:46.456 IO depths : 1=7.6%, 2=19.8%, 4=55.3%, 8=17.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:46.456 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.456 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.456 issued rwts: total=123403,127975,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:46.456 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:46.456 00:19:46.456 Run status group 0 (all jobs): 00:19:46.456 READ: bw=48.2MiB/s (50.5MB/s), 48.2MiB/s-48.2MiB/s (50.5MB/s-50.5MB/s), io=482MiB (505MB), run=10001-10001msec 00:19:46.456 WRITE: bw=50.6MiB/s (53.1MB/s), 50.6MiB/s-50.6MiB/s (53.1MB/s-53.1MB/s), io=500MiB (524MB), run=9874-9874msec 00:19:47.024 ----------------------------------------------------- 00:19:47.024 Suppressions used: 00:19:47.024 count bytes template 00:19:47.024 1 7 /usr/src/fio/parse.c 00:19:47.024 739 70944 /usr/src/fio/iolog.c 00:19:47.024 1 8 libtcmalloc_minimal.so 00:19:47.024 1 904 libcrypto.so 00:19:47.024 ----------------------------------------------------- 00:19:47.024 00:19:47.024 00:19:47.024 real 0m12.975s 00:19:47.024 user 0m12.984s 00:19:47.024 sys 0m0.754s 00:19:47.024 16:16:13 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:47.024 16:16:13 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:19:47.024 ************************************ 00:19:47.024 END TEST bdev_fio_rw_verify 00:19:47.024 ************************************ 00:19:47.024 16:16:13 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:19:47.024 16:16:13 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:47.024 16:16:13 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:19:47.024 16:16:13 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:47.024 16:16:13 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:19:47.024 16:16:13 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:19:47.024 16:16:13 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:19:47.024 16:16:13 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:19:47.024 16:16:13 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:19:47.024 16:16:13 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:19:47.024 16:16:13 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:19:47.024 16:16:13 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:47.024 16:16:13 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:19:47.024 16:16:13 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:19:47.024 16:16:13 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:19:47.024 16:16:13 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:19:47.024 16:16:13 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "145d0159-8404-482a-8aba-c71fa8cf91ae"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "145d0159-8404-482a-8aba-c71fa8cf91ae",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "145d0159-8404-482a-8aba-c71fa8cf91ae",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "b784e3f3-b798-4cc7-b061-973aa8573d17",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "5d4cce38-b6d4-4545-8427-da48932d8847",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "b9e1a265-4343-4759-8e2f-ed27a62779fe",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:19:47.024 16:16:13 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:19:47.024 16:16:13 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:19:47.024 16:16:13 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:47.024 16:16:13 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:19:47.024 /home/vagrant/spdk_repo/spdk 00:19:47.024 16:16:13 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:19:47.024 16:16:13 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
00:19:47.024 00:19:47.024 real 0m13.285s 00:19:47.024 user 0m13.107s 00:19:47.024 sys 0m0.904s 00:19:47.024 16:16:13 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:47.024 16:16:13 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:19:47.024 ************************************ 00:19:47.024 END TEST bdev_fio 00:19:47.024 ************************************ 00:19:47.024 16:16:13 blockdev_raid5f -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:19:47.024 16:16:13 blockdev_raid5f -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:19:47.024 16:16:13 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:19:47.024 16:16:13 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:47.024 16:16:13 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:47.024 ************************************ 00:19:47.024 START TEST bdev_verify 00:19:47.024 ************************************ 00:19:47.024 16:16:13 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:19:47.285 [2024-12-12 16:16:13.447808] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:19:47.285 [2024-12-12 16:16:13.447928] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92622 ] 00:19:47.285 [2024-12-12 16:16:13.624279] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:47.544 [2024-12-12 16:16:13.760683] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:47.544 [2024-12-12 16:16:13.760711] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:19:48.114 Running I/O for 5 seconds... 00:19:50.432 10279.00 IOPS, 40.15 MiB/s [2024-12-12T16:16:17.727Z] 10333.50 IOPS, 40.37 MiB/s [2024-12-12T16:16:18.707Z] 10357.00 IOPS, 40.46 MiB/s [2024-12-12T16:16:19.657Z] 10393.00 IOPS, 40.60 MiB/s [2024-12-12T16:16:19.657Z] 10393.60 IOPS, 40.60 MiB/s 00:19:53.305 Latency(us) 00:19:53.305 [2024-12-12T16:16:19.657Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:53.305 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:53.305 Verification LBA range: start 0x0 length 0x2000 00:19:53.305 raid5f : 5.02 6279.28 24.53 0.00 0.00 30744.83 445.37 22093.36 00:19:53.305 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:53.305 Verification LBA range: start 0x2000 length 0x2000 00:19:53.305 raid5f : 5.02 4119.01 16.09 0.00 0.00 46913.79 169.92 32968.33 00:19:53.305 [2024-12-12T16:16:19.657Z] =================================================================================================================== 00:19:53.305 [2024-12-12T16:16:19.657Z] Total : 10398.29 40.62 0.00 0.00 37150.26 169.92 32968.33 00:19:54.686 ************************************ 00:19:54.686 END TEST bdev_verify 00:19:54.686 ************************************ 00:19:54.686 00:19:54.686 real 0m7.475s 00:19:54.686 user 0m13.736s 00:19:54.686 sys 0m0.357s 
00:19:54.686 16:16:20 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:54.686 16:16:20 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:19:54.686 16:16:20 blockdev_raid5f -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:19:54.686 16:16:20 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:19:54.686 16:16:20 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:54.686 16:16:20 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:54.686 ************************************ 00:19:54.686 START TEST bdev_verify_big_io 00:19:54.686 ************************************ 00:19:54.686 16:16:20 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:19:54.686 [2024-12-12 16:16:20.991938] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:19:54.686 [2024-12-12 16:16:20.992112] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92721 ] 00:19:54.946 [2024-12-12 16:16:21.163854] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:55.206 [2024-12-12 16:16:21.300167] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:19:55.206 [2024-12-12 16:16:21.300186] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:55.774 Running I/O for 5 seconds... 
00:19:57.653 633.00 IOPS, 39.56 MiB/s [2024-12-12T16:16:25.385Z] 728.50 IOPS, 45.53 MiB/s [2024-12-12T16:16:26.324Z] 739.67 IOPS, 46.23 MiB/s [2024-12-12T16:16:27.263Z] 729.75 IOPS, 45.61 MiB/s [2024-12-12T16:16:27.263Z] 761.60 IOPS, 47.60 MiB/s 00:20:00.911 Latency(us) 00:20:00.911 [2024-12-12T16:16:27.263Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:00.911 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:20:00.911 Verification LBA range: start 0x0 length 0x200 00:20:00.912 raid5f : 5.26 434.83 27.18 0.00 0.00 7398550.60 171.71 324188.56 00:20:00.912 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:20:00.912 Verification LBA range: start 0x200 length 0x200 00:20:00.912 raid5f : 5.31 334.70 20.92 0.00 0.00 9524012.42 183.34 404777.81 00:20:00.912 [2024-12-12T16:16:27.264Z] =================================================================================================================== 00:20:00.912 [2024-12-12T16:16:27.264Z] Total : 769.53 48.10 0.00 0.00 8328440.14 171.71 404777.81 00:20:02.821 00:20:02.821 real 0m7.773s 00:20:02.821 user 0m14.362s 00:20:02.821 sys 0m0.331s 00:20:02.821 16:16:28 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:02.821 16:16:28 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:20:02.821 ************************************ 00:20:02.821 END TEST bdev_verify_big_io 00:20:02.821 ************************************ 00:20:02.821 16:16:28 blockdev_raid5f -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:02.821 16:16:28 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:20:02.821 16:16:28 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:02.821 16:16:28 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:02.821 ************************************ 00:20:02.821 START TEST bdev_write_zeroes 00:20:02.821 ************************************ 00:20:02.821 16:16:28 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:02.821 [2024-12-12 16:16:28.848359] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:20:02.821 [2024-12-12 16:16:28.848483] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92821 ] 00:20:02.821 [2024-12-12 16:16:29.031705] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:02.821 [2024-12-12 16:16:29.164139] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:20:03.761 Running I/O for 1 seconds... 
00:20:04.700 29559.00 IOPS, 115.46 MiB/s 00:20:04.700 Latency(us) 00:20:04.700 [2024-12-12T16:16:31.052Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:04.700 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:20:04.700 raid5f : 1.01 29523.00 115.32 0.00 0.00 4322.87 1416.61 5981.23 00:20:04.700 [2024-12-12T16:16:31.052Z] =================================================================================================================== 00:20:04.700 [2024-12-12T16:16:31.052Z] Total : 29523.00 115.32 0.00 0.00 4322.87 1416.61 5981.23 00:20:06.082 00:20:06.082 real 0m3.464s 00:20:06.082 user 0m2.970s 00:20:06.082 sys 0m0.366s 00:20:06.082 16:16:32 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:06.082 ************************************ 00:20:06.082 END TEST bdev_write_zeroes 00:20:06.082 ************************************ 00:20:06.082 16:16:32 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:20:06.082 16:16:32 blockdev_raid5f -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:06.082 16:16:32 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:20:06.082 16:16:32 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:06.082 16:16:32 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:06.082 ************************************ 00:20:06.082 START TEST bdev_json_nonenclosed 00:20:06.082 ************************************ 00:20:06.082 16:16:32 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:06.082 [2024-12-12 
16:16:32.379514] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:20:06.082 [2024-12-12 16:16:32.379708] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92877 ] 00:20:06.343 [2024-12-12 16:16:32.556021] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:06.343 [2024-12-12 16:16:32.692973] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:20:06.343 [2024-12-12 16:16:32.693163] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:20:06.343 [2024-12-12 16:16:32.693231] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:20:06.343 [2024-12-12 16:16:32.693269] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:20:06.602 00:20:06.602 real 0m0.661s 00:20:06.602 user 0m0.405s 00:20:06.602 sys 0m0.151s 00:20:06.602 16:16:32 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:06.602 16:16:32 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:20:06.602 ************************************ 00:20:06.602 END TEST bdev_json_nonenclosed 00:20:06.602 ************************************ 00:20:06.863 16:16:33 blockdev_raid5f -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:06.863 16:16:33 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:20:06.863 16:16:33 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:06.863 16:16:33 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:06.863 
************************************ 00:20:06.863 START TEST bdev_json_nonarray 00:20:06.863 ************************************ 00:20:06.863 16:16:33 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:06.863 [2024-12-12 16:16:33.114346] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:20:06.863 [2024-12-12 16:16:33.114512] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92904 ] 00:20:07.123 [2024-12-12 16:16:33.289306] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:07.123 [2024-12-12 16:16:33.424766] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:20:07.123 [2024-12-12 16:16:33.424977] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:20:07.123 [2024-12-12 16:16:33.425043] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:20:07.123 [2024-12-12 16:16:33.425121] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:20:07.382 ************************************ 00:20:07.383 END TEST bdev_json_nonarray 00:20:07.383 ************************************ 00:20:07.383 00:20:07.383 real 0m0.661s 00:20:07.383 user 0m0.416s 00:20:07.383 sys 0m0.139s 00:20:07.383 16:16:33 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:07.383 16:16:33 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:20:07.643 16:16:33 blockdev_raid5f -- bdev/blockdev.sh@824 -- # [[ raid5f == bdev ]] 00:20:07.643 16:16:33 blockdev_raid5f -- bdev/blockdev.sh@832 -- # [[ raid5f == gpt ]] 00:20:07.643 16:16:33 blockdev_raid5f -- bdev/blockdev.sh@836 -- # [[ raid5f == crypto_sw ]] 00:20:07.643 16:16:33 blockdev_raid5f -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:20:07.643 16:16:33 blockdev_raid5f -- bdev/blockdev.sh@849 -- # cleanup 00:20:07.643 16:16:33 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:20:07.643 16:16:33 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:20:07.643 16:16:33 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:20:07.643 16:16:33 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:20:07.643 16:16:33 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:20:07.643 16:16:33 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:20:07.643 ************************************ 00:20:07.643 END TEST blockdev_raid5f 00:20:07.643 ************************************ 00:20:07.643 00:20:07.643 real 0m49.920s 00:20:07.643 user 1m6.207s 00:20:07.643 sys 0m5.828s 00:20:07.643 16:16:33 blockdev_raid5f -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:20:07.643 16:16:33 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:07.643 16:16:33 -- spdk/autotest.sh@194 -- # uname -s 00:20:07.643 16:16:33 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:20:07.643 16:16:33 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:20:07.643 16:16:33 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:20:07.643 16:16:33 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:20:07.643 16:16:33 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:20:07.643 16:16:33 -- spdk/autotest.sh@260 -- # timing_exit lib 00:20:07.643 16:16:33 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:07.643 16:16:33 -- common/autotest_common.sh@10 -- # set +x 00:20:07.643 16:16:33 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:20:07.643 16:16:33 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:20:07.643 16:16:33 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:20:07.643 16:16:33 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:20:07.643 16:16:33 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:20:07.643 16:16:33 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:20:07.643 16:16:33 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:20:07.643 16:16:33 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:20:07.643 16:16:33 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:20:07.643 16:16:33 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:20:07.643 16:16:33 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:20:07.643 16:16:33 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:20:07.643 16:16:33 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:20:07.643 16:16:33 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:20:07.643 16:16:33 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:20:07.643 16:16:33 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:20:07.643 16:16:33 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:20:07.643 16:16:33 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:20:07.643 16:16:33 -- spdk/autotest.sh@385 -- # trap - 
SIGINT SIGTERM EXIT 00:20:07.643 16:16:33 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:20:07.643 16:16:33 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:07.643 16:16:33 -- common/autotest_common.sh@10 -- # set +x 00:20:07.643 16:16:33 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:20:07.643 16:16:33 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:20:07.643 16:16:33 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:20:07.643 16:16:33 -- common/autotest_common.sh@10 -- # set +x 00:20:10.182 INFO: APP EXITING 00:20:10.183 INFO: killing all VMs 00:20:10.183 INFO: killing vhost app 00:20:10.183 INFO: EXIT DONE 00:20:10.442 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:10.442 Waiting for block devices as requested 00:20:10.702 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:10.702 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:11.642 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:11.642 Cleaning 00:20:11.642 Removing: /var/run/dpdk/spdk0/config 00:20:11.642 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:20:11.642 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:20:11.642 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:20:11.642 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:20:11.642 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:20:11.642 Removing: /var/run/dpdk/spdk0/hugepage_info 00:20:11.642 Removing: /dev/shm/spdk_tgt_trace.pid58804 00:20:11.642 Removing: /var/run/dpdk/spdk0 00:20:11.642 Removing: /var/run/dpdk/spdk_pid58547 00:20:11.642 Removing: /var/run/dpdk/spdk_pid58804 00:20:11.642 Removing: /var/run/dpdk/spdk_pid59044 00:20:11.642 Removing: /var/run/dpdk/spdk_pid59148 00:20:11.642 Removing: /var/run/dpdk/spdk_pid59215 00:20:11.642 Removing: /var/run/dpdk/spdk_pid59349 00:20:11.642 Removing: 
/var/run/dpdk/spdk_pid59372 00:20:11.642 Removing: /var/run/dpdk/spdk_pid59582 00:20:11.642 Removing: /var/run/dpdk/spdk_pid59694 00:20:11.642 Removing: /var/run/dpdk/spdk_pid59806 00:20:11.642 Removing: /var/run/dpdk/spdk_pid59934 00:20:11.643 Removing: /var/run/dpdk/spdk_pid60048 00:20:11.643 Removing: /var/run/dpdk/spdk_pid60088 00:20:11.903 Removing: /var/run/dpdk/spdk_pid60130 00:20:11.903 Removing: /var/run/dpdk/spdk_pid60206 00:20:11.903 Removing: /var/run/dpdk/spdk_pid60324 00:20:11.903 Removing: /var/run/dpdk/spdk_pid60773 00:20:11.903 Removing: /var/run/dpdk/spdk_pid60854 00:20:11.903 Removing: /var/run/dpdk/spdk_pid60939 00:20:11.903 Removing: /var/run/dpdk/spdk_pid60957 00:20:11.903 Removing: /var/run/dpdk/spdk_pid61117 00:20:11.903 Removing: /var/run/dpdk/spdk_pid61133 00:20:11.903 Removing: /var/run/dpdk/spdk_pid61292 00:20:11.903 Removing: /var/run/dpdk/spdk_pid61314 00:20:11.903 Removing: /var/run/dpdk/spdk_pid61389 00:20:11.903 Removing: /var/run/dpdk/spdk_pid61407 00:20:11.903 Removing: /var/run/dpdk/spdk_pid61471 00:20:11.903 Removing: /var/run/dpdk/spdk_pid61500 00:20:11.903 Removing: /var/run/dpdk/spdk_pid61704 00:20:11.903 Removing: /var/run/dpdk/spdk_pid61740 00:20:11.903 Removing: /var/run/dpdk/spdk_pid61829 00:20:11.903 Removing: /var/run/dpdk/spdk_pid63206 00:20:11.903 Removing: /var/run/dpdk/spdk_pid63417 00:20:11.903 Removing: /var/run/dpdk/spdk_pid63564 00:20:11.903 Removing: /var/run/dpdk/spdk_pid64217 00:20:11.903 Removing: /var/run/dpdk/spdk_pid64423 00:20:11.903 Removing: /var/run/dpdk/spdk_pid64569 00:20:11.903 Removing: /var/run/dpdk/spdk_pid65223 00:20:11.903 Removing: /var/run/dpdk/spdk_pid65553 00:20:11.903 Removing: /var/run/dpdk/spdk_pid65699 00:20:11.903 Removing: /var/run/dpdk/spdk_pid67084 00:20:11.903 Removing: /var/run/dpdk/spdk_pid67343 00:20:11.903 Removing: /var/run/dpdk/spdk_pid67488 00:20:11.903 Removing: /var/run/dpdk/spdk_pid68875 00:20:11.903 Removing: /var/run/dpdk/spdk_pid69134 00:20:11.903 Removing: 
/var/run/dpdk/spdk_pid69274 00:20:11.903 Removing: /var/run/dpdk/spdk_pid70665 00:20:11.903 Removing: /var/run/dpdk/spdk_pid71115 00:20:11.903 Removing: /var/run/dpdk/spdk_pid71256 00:20:11.903 Removing: /var/run/dpdk/spdk_pid72749 00:20:11.903 Removing: /var/run/dpdk/spdk_pid73021 00:20:11.903 Removing: /var/run/dpdk/spdk_pid73179 00:20:11.903 Removing: /var/run/dpdk/spdk_pid74670 00:20:11.903 Removing: /var/run/dpdk/spdk_pid74930 00:20:11.903 Removing: /var/run/dpdk/spdk_pid75077 00:20:11.903 Removing: /var/run/dpdk/spdk_pid76581 00:20:11.903 Removing: /var/run/dpdk/spdk_pid77074 00:20:11.903 Removing: /var/run/dpdk/spdk_pid77225 00:20:11.903 Removing: /var/run/dpdk/spdk_pid77363 00:20:11.903 Removing: /var/run/dpdk/spdk_pid77793 00:20:11.903 Removing: /var/run/dpdk/spdk_pid78524 00:20:11.903 Removing: /var/run/dpdk/spdk_pid78920 00:20:11.903 Removing: /var/run/dpdk/spdk_pid79633 00:20:11.903 Removing: /var/run/dpdk/spdk_pid80079 00:20:11.903 Removing: /var/run/dpdk/spdk_pid80835 00:20:11.903 Removing: /var/run/dpdk/spdk_pid81244 00:20:11.903 Removing: /var/run/dpdk/spdk_pid83215 00:20:11.903 Removing: /var/run/dpdk/spdk_pid83660 00:20:11.903 Removing: /var/run/dpdk/spdk_pid84103 00:20:11.903 Removing: /var/run/dpdk/spdk_pid86203 00:20:12.163 Removing: /var/run/dpdk/spdk_pid86684 00:20:12.164 Removing: /var/run/dpdk/spdk_pid87200 00:20:12.164 Removing: /var/run/dpdk/spdk_pid88274 00:20:12.164 Removing: /var/run/dpdk/spdk_pid88597 00:20:12.164 Removing: /var/run/dpdk/spdk_pid89534 00:20:12.164 Removing: /var/run/dpdk/spdk_pid89857 00:20:12.164 Removing: /var/run/dpdk/spdk_pid90798 00:20:12.164 Removing: /var/run/dpdk/spdk_pid91121 00:20:12.164 Removing: /var/run/dpdk/spdk_pid91797 00:20:12.164 Removing: /var/run/dpdk/spdk_pid92077 00:20:12.164 Removing: /var/run/dpdk/spdk_pid92150 00:20:12.164 Removing: /var/run/dpdk/spdk_pid92192 00:20:12.164 Removing: /var/run/dpdk/spdk_pid92443 00:20:12.164 Removing: /var/run/dpdk/spdk_pid92622 00:20:12.164 Removing: 
/var/run/dpdk/spdk_pid92721 00:20:12.164 Removing: /var/run/dpdk/spdk_pid92821 00:20:12.164 Removing: /var/run/dpdk/spdk_pid92877 00:20:12.164 Removing: /var/run/dpdk/spdk_pid92904 00:20:12.164 Clean 00:20:12.164 16:16:38 -- common/autotest_common.sh@1453 -- # return 0 00:20:12.164 16:16:38 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:20:12.164 16:16:38 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:12.164 16:16:38 -- common/autotest_common.sh@10 -- # set +x 00:20:12.164 16:16:38 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:20:12.164 16:16:38 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:12.164 16:16:38 -- common/autotest_common.sh@10 -- # set +x 00:20:12.423 16:16:38 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:20:12.423 16:16:38 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:20:12.423 16:16:38 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:20:12.423 16:16:38 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:20:12.423 16:16:38 -- spdk/autotest.sh@398 -- # hostname 00:20:12.423 16:16:38 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:20:12.423 geninfo: WARNING: invalid characters removed from testname! 
00:20:38.988 16:17:03 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:40.896 16:17:06 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:42.805 16:17:09 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:44.714 16:17:11 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:47.302 16:17:13 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o 
/home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:49.210 16:17:15 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:51.751 16:17:17 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:20:51.751 16:17:17 -- spdk/autorun.sh@1 -- $ timing_finish 00:20:51.751 16:17:17 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:20:51.751 16:17:17 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:20:51.751 16:17:17 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:20:51.751 16:17:17 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:20:51.751 + [[ -n 5424 ]] 00:20:51.751 + sudo kill 5424 00:20:51.761 [Pipeline] } 00:20:51.778 [Pipeline] // timeout 00:20:51.783 [Pipeline] } 00:20:51.798 [Pipeline] // stage 00:20:51.804 [Pipeline] } 00:20:51.818 [Pipeline] // catchError 00:20:51.828 [Pipeline] stage 00:20:51.830 [Pipeline] { (Stop VM) 00:20:51.843 [Pipeline] sh 00:20:52.126 + vagrant halt 00:20:54.664 ==> default: Halting domain... 00:21:02.809 [Pipeline] sh 00:21:03.093 + vagrant destroy -f 00:21:05.634 ==> default: Removing domain... 
00:21:05.647 [Pipeline] sh 00:21:05.932 + mv output /var/jenkins/workspace/raid-vg-autotest/output 00:21:05.942 [Pipeline] } 00:21:05.956 [Pipeline] // stage 00:21:05.961 [Pipeline] } 00:21:05.975 [Pipeline] // dir 00:21:05.980 [Pipeline] } 00:21:05.994 [Pipeline] // wrap 00:21:06.000 [Pipeline] } 00:21:06.013 [Pipeline] // catchError 00:21:06.022 [Pipeline] stage 00:21:06.025 [Pipeline] { (Epilogue) 00:21:06.037 [Pipeline] sh 00:21:06.323 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:21:10.541 [Pipeline] catchError 00:21:10.543 [Pipeline] { 00:21:10.556 [Pipeline] sh 00:21:10.841 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:21:10.841 Artifacts sizes are good 00:21:10.851 [Pipeline] } 00:21:10.864 [Pipeline] // catchError 00:21:10.875 [Pipeline] archiveArtifacts 00:21:10.883 Archiving artifacts 00:21:10.988 [Pipeline] cleanWs 00:21:11.019 [WS-CLEANUP] Deleting project workspace... 00:21:11.019 [WS-CLEANUP] Deferred wipeout is used... 00:21:11.030 [WS-CLEANUP] done 00:21:11.032 [Pipeline] } 00:21:11.047 [Pipeline] // stage 00:21:11.052 [Pipeline] } 00:21:11.065 [Pipeline] // node 00:21:11.071 [Pipeline] End of Pipeline 00:21:11.133 Finished: SUCCESS